HyperAI


UTD-MHAD Human Action Recognition Dataset

Date

3 years ago

Size

1.17 GB

Organization

University of Texas at Dallas

License

Other

Featured Image

UTD stands for University of Texas at Dallas, and MHAD stands for Multimodal Human Action Dataset. The dataset contains videos of 27 actions performed by 8 subjects. Each subject repeated each action 4 times, yielding 861 action sequences in total (3 sequences were removed due to corruption). The dataset provides four time-synchronized data modalities: RGB video, depth video, and skeleton joint positions captured by a Kinect camera, plus inertial signals from a wearable inertial sensor.
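The 27 × 8 × 4 structure above means each sequence can be identified by its action, subject, and trial indices. As a minimal sketch, assuming the `a{action}_s{subject}_t{trial}_{modality}` file-naming convention commonly used by UTD-MHAD (an assumption; verify against the README.txt in the archive), the sequences could be indexed like this:

```python
import re

# Hypothetical pattern for UTD-MHAD file names such as "a1_s1_t1_skeleton.mat"
# (an assumption based on the dataset's usual convention; check README.txt).
PATTERN = re.compile(r"a(\d+)_s(\d+)_t(\d+)_(\w+)\.(mat|avi)")

def parse_name(filename):
    """Return (action, subject, trial, modality) parsed from a file name."""
    m = PATTERN.fullmatch(filename)
    if m is None:
        raise ValueError(f"unrecognized file name: {filename}")
    action, subject, trial = (int(g) for g in m.groups()[:3])
    return action, subject, trial, m.group(4)

# 27 actions x 8 subjects x 4 trials = 864 nominal sequences;
# 3 corrupted sequences were removed, leaving 861.
print(parse_name("a1_s1_t1_skeleton.mat"))  # (1, 1, 1, 'skeleton')
```

A parser like this is handy for building train/test splits, e.g. the odd/even subject split often used with this dataset.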

This dataset can be used to study sensor-fusion methods, such as the approach used by the dataset's authors to combine depth-camera data with inertial-sensor data. It is also suitable for multimodal research in human action recognition more broadly.
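One simple way to combine the two sensor streams mentioned above is score-level (late) fusion: train a classifier per modality and average their per-class scores. This is only an illustrative sketch, not the authors' exact method, and the scores below are made-up numbers:

```python
# Minimal sketch of score-level (late) fusion of two modality classifiers,
# in the spirit of combining depth-camera and inertial-sensor predictions.

def fuse_scores(depth_scores, inertial_scores, w=0.5):
    """Weighted average of per-class scores from two modalities."""
    return [w * d + (1 - w) * i for d, i in zip(depth_scores, inertial_scores)]

depth = [0.2, 0.7, 0.1]     # hypothetical class probabilities from a depth model
inertial = [0.1, 0.3, 0.6]  # hypothetical probabilities from an inertial model
fused = fuse_scores(depth, inertial)
predicted_class = max(range(len(fused)), key=fused.__getitem__)
print(predicted_class)  # index of the class with the highest fused score
```

The weight `w` can be tuned on a validation split to reflect how reliable each modality is for a given action.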

UTD-MHAD.torrent
Seeding: 3 · Downloading: 0 · Completed: 663 · Total Downloads: 1,078
  • UTD-MHAD/
    • README.md
      1.41 KB
    • README.txt
      2.83 KB
    • data/
      • Depth.zip
        120.6 MB
      • Inertial.zip
        125.7 MB
      • RGB.zip
        1.15 GB
      • Sample_Code.zip
        1.15 GB
      • Skeleton.zip
        1.17 GB

