Implementation of "A Self-Supervised Approach on Motion Calibration for Enhancing Physical Plausibility in Text-to-Motion".
This repository is built upon the codebase of momask-codes.
The codebase has been tested with the following setup:
- Python 3.9.21
- CUDA 12.2
- PyTorch 2.1.0
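As a quick sanity check before installing, the running interpreter can be compared against the tested Python version. This small sketch is not part of the repo and only compares minor versions:

```python
import sys

# The repository was tested with Python 3.9.21; compare minor versions only.
TESTED = (3, 9)

def python_matches(tested=TESTED):
    """Return True if the running interpreter matches the tested minor version."""
    return sys.version_info[:2] == tuple(tested)

if __name__ == "__main__":
    status = "OK" if python_matches() else "mismatch"
    print(f"Python {sys.version_info.major}.{sys.version_info.minor}: {status}")
```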
Create and activate a dedicated conda environment, then install dependencies:
conda create --name dmc python=3.9
conda activate dmc
pip install -r requirements.txt

Download the HumanML3D dataset by following the official guidelines provided in the repository below, and place it under:
./dataset/HumanML3D
- HumanML3D repository: https://github.com/EricGuo5513/HumanML3D
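Once the dataset is in place, a small helper can verify that the expected files exist. This is a hypothetical check, not part of the repo; the entry names follow the usual HumanML3D layout and are an assumption:

```python
import os

# Assumed HumanML3D layout; adjust these entries if your copy differs.
EXPECTED = ["new_joint_vecs", "texts", "train.txt", "test.txt"]

def missing_entries(root, expected=EXPECTED):
    """Return the expected entries that are absent under the dataset root."""
    return [name for name in expected if not os.path.exists(os.path.join(root, name))]

if __name__ == "__main__":
    missing = missing_entries("./dataset/HumanML3D")
    print("missing:", missing or "none")
```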
We additionally provide our own mean and standard deviation statistics. These files must also be placed inside the ./dataset/HumanML3D directory.
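These statistics are typically used to standardize motion features before training and to invert that standardization afterwards. A minimal sketch of the usual normalize/denormalize round trip follows; the file names in the comment and the epsilon guard are assumptions, not the repo's exact code:

```python
import numpy as np

def normalize(motion, mean, std):
    # Standardize per-dimension motion features; eps guards zero-variance dims.
    return (motion - mean) / (std + 1e-8)

def denormalize(motion, mean, std):
    # Invert the standardization above.
    return motion * (std + 1e-8) + mean

if __name__ == "__main__":
    # In the repo these would be loaded from ./dataset/HumanML3D, e.g.:
    #   mean = np.load("./dataset/HumanML3D/Mean.npy")  # assumed file name
    mean, std = np.zeros(263), np.ones(263)  # toy stand-ins; HumanML3D uses 263-dim features
    x = np.random.randn(40, 263)
    assert np.allclose(denormalize(normalize(x, mean, std), mean, std), x)
```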
You can download the pretrained Distortion-aware Motion Calibrator (DMC) models using:
bash prepare/download_models.sh

If the script fails, the models can be downloaded manually from the following link:
Download the required evaluation models and GloVe embeddings:
bash prepare/download_evaluator.sh
bash prepare/download_glove.sh

Train the DMC models:

python train_dmc_wgan.py --name wgan --config_path ./config/wgan.yaml
python train_dmc_denoising.py --name denoising --config_path ./config/denoising.yaml

Evaluate the trained models:

python eval_dmc_wgan.py --name dmc_wgan --config_path ./config/wgan.yaml --save_anim
python eval_dmc_denoising.py --name dmc_denoising --config_path ./config/denoising.yaml --save_anim

Motion results are rendered using Isaac Sim. To visualize 3D joint sequences in Isaac Sim, we use the following renderer:

This renderer converts motion outputs into Isaac Sim-compatible visualizations.
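The training and evaluation scripts above share a common flag pattern (--name, --config_path, plus --save_anim for evaluation). As a minimal sketch of that command-line interface, assuming standard argparse semantics and not the repo's actual parser:

```python
import argparse

def build_parser():
    # Mirrors the flags used by the train/eval commands above (assumed semantics).
    p = argparse.ArgumentParser(description="DMC training/evaluation (sketch)")
    p.add_argument("--name", required=True, help="experiment name, e.g. wgan")
    p.add_argument("--config_path", required=True, help="path to a YAML config file")
    p.add_argument("--save_anim", action="store_true", help="export rendered animations")
    return p

if __name__ == "__main__":
    args = build_parser().parse_args(
        ["--name", "dmc_wgan", "--config_path", "./config/wgan.yaml", "--save_anim"]
    )
    print(args.name, args.config_path, args.save_anim)
```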
This project builds upon the implementation and dataset ecosystem provided by: