HumanML3D is a 3D human motion-language dataset that originates from a combination of the HumanAct12 and AMASS datasets.
It covers a broad range of human actions such as daily activities (e.g., 'walking', 'jumping'), sports (e.g., 'swimming', 'playing golf'), acrobatics (e.g., 'cartwheel') and artistry (e.g., 'dancing').
AMASS (Archive of Motion Capture as Surface Shapes) is a large-scale motion capture dataset that unifies a number of existing motion capture datasets into a common format, providing a comprehensive collection of human motion data.
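For orientation, each AMASS sequence ships as a compressed NumPy archive of SMPL-H body parameters. A minimal loading sketch (the file path below is hypothetical; any extracted sequence works the same way):

```python
import numpy as np

# Hypothetical path to one extracted AMASS sequence archive.
data = np.load("CMU/01/01_01_poses.npz")

poses = data["poses"]                 # (n_frames, 156) SMPL-H pose parameters (axis-angle)
trans = data["trans"]                 # (n_frames, 3) global root translation
betas = data["betas"]                 # (16,) body shape coefficients
fps = float(data["mocap_framerate"])  # capture frame rate

print(poses.shape, trans.shape, betas.shape, fps)
```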
Related content:
The KIT Motion-Language Dataset (KIT-ML) is a related dataset that contains 3,911 motions and 6,278 descriptions. We processed the KIT-ML dataset following the same procedure as for HumanML3D and provide access to it in this repository. However, if you use the KIT-ML dataset, please remember to cite the original paper.
If this dataset is useful in your projects, we would appreciate a star on this codebase. 😆😆
🙆♀️ T2M - The first work on HumanML3D, which learns to generate 3D motion from textual descriptions with a temporal VAE.
🏃 TM2T - Learns the mutual mapping between texts and motions through discrete motion tokens.
💃 TM2D - Generates dance motions from text instructions.
🐝 MoMask - Next-level text2motion generation using residual VQ and generative masked modeling.
The KIT-ML dataset can be downloaded directly [Here]. Due to the distribution policy of the AMASS dataset, we are not allowed to distribute the data directly. Instead, we provide a series of scripts that reproduce our HumanML3D dataset from the AMASS dataset.
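Before running the scripts, make sure the raw AMASS sub-datasets you downloaded are extracted locally. A small sanity check, assuming a hypothetical `./amass_data/` root (adjust to wherever the scripts expect the data):

```python
from pathlib import Path

# Hypothetical layout: raw AMASS sequences (SMPL-H .npz files) extracted under ./amass_data/.
amass_root = Path("./amass_data")
npz_files = sorted(amass_root.rglob("*.npz"))

print(f"Found {len(npz_files)} AMASS sequence files")
assert npz_files, "Download the AMASS sub-datasets from https://amass.is.tue.mpg.de/ and extract them first"
```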
You need to clone this repository and set up the virtual environment.
[2022/12/15] Update: Installing matplotlib=3.3.4 can prevent small deviations of the generated data from the reference data. See Issue.
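To confirm the pin took effect inside the activated environment, a quick check:

```python
import matplotlib

# The 2022/12/15 update above recommends matplotlib 3.3.4 to match the reference data.
assert matplotlib.__version__ == "3.3.4", matplotlib.__version__
```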
First, create a conda environment; then run the following commands to install the required packages. Finally, run the scripts in order to obtain the HumanML3D dataset.
```bash
# Create conda environment
conda create -n humanml3d python=3.7 -y
conda activate humanml3d

# Install required packages
bash install_env.sh

# Run script to generate HumanML3D dataset
python pose_data_generator.py

# Play the animation to check the data
python pose_data_animation.py
```
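Once generation finishes, you can sanity-check the output. A minimal sketch, assuming the standard HumanML3D layout of per-sequence .npy files (22-joint positions and 263-dim feature vectors at 20 fps; the file names below are hypothetical):

```python
import numpy as np

# Hypothetical output paths; adjust to where the generation script writes its results.
joints = np.load("HumanML3D/new_joints/000000.npy")     # (n_frames, 22, 3) joint positions
feats = np.load("HumanML3D/new_joint_vecs/000000.npy")  # (n_frames, 263) motion features

assert joints.ndim == 3 and joints.shape[1:] == (22, 3)
assert feats.ndim == 2 and feats.shape[1] == 263
print(f"{joints.shape[0]} frames at 20 fps ≈ {joints.shape[0] / 20:.1f} s")
```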
Note: This repository has been refined by the author and may differ slightly from the original. If you want to look into the original codebase, please visit here.
