UTD-MHAD Human Action Recognition Dataset

UTD stands for the University of Texas at Dallas, and MHAD for Multimodal Human Action Dataset. The dataset contains 27 actions performed by 8 subjects, with each subject performing each action 4 times. Of the resulting 27 × 8 × 4 = 864 sequences, 3 were removed due to corruption, leaving 861 action sequences. Each sequence is captured in four time-synchronized data modalities: RGB video, depth video, and skeleton joint positions from a Kinect camera, plus inertial signals from a wearable inertial sensor.
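As a minimal loading sketch: the skeleton and inertial modalities are commonly distributed as MATLAB .mat files, so they can be read with SciPy. The a{action}_s{subject}_t{trial} file-naming pattern and the d_iner / d_skel variable keys below are assumptions about the distribution format; verify them against the downloaded data.

```python
import scipy.io as sio

# Assumed naming pattern: a{action}_s{subject}_t{trial}_{modality}.mat
# (check the download for the exact convention).
inertial = sio.loadmat("a1_s1_t1_inertial.mat")["d_iner"]   # assumed key
skeleton = sio.loadmat("a1_s1_t1_skeleton.mat")["d_skel"]   # assumed key

# Inertial signal: (num_samples, 6), i.e. 3-axis accelerometer + 3-axis gyroscope.
print(inertial.shape)
# Skeleton: (20 joints, 3 coordinates, num_frames).
print(skeleton.shape)
```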
This dataset can be used to study sensor-fusion methods, such as the approach in the original work that combines depth camera data with inertial sensor data, and more generally for multimodal research in human action recognition.
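To illustrate one possible strategy (feature-level fusion, not the specific method from the dataset's paper), the sketch below concatenates simple per-modality statistics into a single feature vector; the array shapes assumed match the loading sketch above.

```python
import numpy as np

def inertial_features(signal: np.ndarray) -> np.ndarray:
    """Per-channel mean and std over time for a (T, 6) inertial signal."""
    return np.concatenate([signal.mean(axis=0), signal.std(axis=0)])

def skeleton_features(joints: np.ndarray) -> np.ndarray:
    """Mean 3D position of each joint over time for a (20, 3, T) skeleton track."""
    return joints.mean(axis=2).ravel()

def fused_features(signal: np.ndarray, joints: np.ndarray) -> np.ndarray:
    # Feature-level fusion: concatenate per-modality descriptors into one vector.
    return np.concatenate([inertial_features(signal), skeleton_features(joints)])
```

Fixed-length vectors like these can then be fed to any standard classifier, which is one common baseline setup for multimodal experiments on this dataset.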