# SMPL-Pose: Adaptive Graph Transformer with LLM Priors for 3D Human Reconstruction
## 1. Introduction

SMPL-Pose is a transformer-based method for monocular 3D human shape and pose estimation, aimed at accurate and efficient 3D human modeling. This repository contains the code, datasets, and instructions for using SMPL-Pose.
## 2. Installation

- Testing: most modern GPUs are sufficient to run the testing process.

Set up the conda environment and dependencies:
```bash
conda create -n SMPL-Pose python=3.8
conda activate SMPL-Pose
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install scipy==1.5.0 scikit-image==0.19.1 opencv-python==4.5.4.58 imageio matplotlib numpy==1.20.3 chumpy==0.70 ipython ipykernel ipdb smplx==0.1.28 tensorboardx==2.4 tensorboard==2.7.0 easydict pillow==8.4.0
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install -c bottler nvidiacub
wget https://anaconda.org/pytorch3d/pytorch3d/0.5.0/download/linux-64/pytorch3d-0.5.0-py38_cu111_pyt180.tar.bz2 --no-check-certificate
conda install pytorch3d-0.5.0-py38_cu111_pyt180.tar.bz2
rm pytorch3d-0.5.0-py38_cu111_pyt180.tar.bz2
```

- Download the meta data and extract it into `PATH_to_SMPL-Pose/meta_data`.
- Download the pretrained models and extract them into `PATH_to_SMPL-Pose/pretrained`.
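Before running anything heavier, a quick sanity check of the environment can save time. This is a minimal sketch, not part of the SMPL-Pose codebase:

```python
# Sanity check for the installed environment (not part of the repo).
import torch
import pytorch3d
import smplx  # noqa: F401 -- imported only to confirm it installed correctly

print(f"torch {torch.__version__}")          # expected: 1.8.0+cu111
print(f"pytorch3d {pytorch3d.__version__}")  # expected: 0.5.0
print(f"CUDA available: {torch.cuda.is_available()}")
print(f"GPUs visible: {torch.cuda.device_count()}")
```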
## 3. Demo

Run the demo on a sample image:

```bash
python demo.py --img_path samples/im01.png
```
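To process a whole folder of images, the demo can be invoked once per file. A minimal sketch, assuming only the `--img_path` flag shown above:

```python
# Hypothetical batch runner for the demo (assumes the --img_path flag above).
import subprocess
from pathlib import Path

for img in sorted(Path("samples").glob("*.png")):
    print(f"Processing {img} ...")
    subprocess.run(["python", "demo.py", "--img_path", str(img)], check=True)
```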
## 4. Datasets

There are two ways to download the datasets:

- Recommended (faster): use `azcopy`.
  1. Download `azcopy` from [here](Please replace with the actual download link).
  2. Navigate to the directory where you want to store the dataset: `cd PATH_to_STORE_DATASET`
  3. Set the `azcopy` path: `azcopy_path=PATH_to_AZCOPY`
  4. Run the download script: `bash PATH_to_SMPL-Pose/scripts/download_datasets_azcopy.sh`
  5. Create a symbolic link: `cd PATH_to_SMPL-Pose && ln -s PATH_to_STORE_DATASET ./datasets`
- Alternative: use `wget` (usually slower and less stable, but no dependency on `azcopy`).
  1. Navigate to the dataset storage directory: `cd PATH_to_STORE_DATASET`
  2. Run the download script: `bash PATH_to_SMPL-Pose/scripts/download_datasets_wget.sh`
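After downloading, it is worth confirming that the `datasets` link resolves. A hedged sketch; the folder names `h36m` and `3dpw` are assumptions inferred from the `--data_mode` values used below, not a documented layout:

```python
# Hypothetical layout check; folder names are guesses based on --data_mode.
from pathlib import Path

datasets = Path("datasets")
assert datasets.exists(), "datasets/ missing -- run the symlink step above"

for name in ("h36m", "3dpw"):
    status = "found" if (datasets / name).exists() else "missing"
    print(f"{name}: {status}")
```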
## 5. Testing

- Test on the H36M dataset
  - For SMPL-Pose:
    ```bash
    python -m torch.distributed.launch --nproc_per_node=2 --use_env main.py --eval_only --val_batch_size=128 --model_type=SMPL-Pose --data_mode=h36m --hrnet_type=w32 --load_checkpoint=pretrained/SMPL-Pose_h36m.pt
    ```
  - For SMPL-Pose-L:
    ```bash
    python -m torch.distributed.launch --nproc_per_node=2 --use_env main.py --eval_only --val_batch_size=128 --model_type=SMPL-Pose --data_mode=h36m --hrnet_type=w48 --load_checkpoint=pretrained/SMPL-Pose-L_h36m.pt
    ```
- Test on the 3DPW dataset
  - For SMPL-Pose:
    ```bash
    python -m torch.distributed.launch --nproc_per_node=2 --use_env main.py --eval_only --val_batch_size=128 --model_type=SMPL-Pose --data_mode=3dpw --hrnet_type=w32 --load_checkpoint=pretrained/SMPL-Pose_3dpw.pt
    ```
  - For SMPL-Pose-L:
    ```bash
    python -m torch.distributed.launch --nproc_per_node=2 --use_env main.py --eval_only --val_batch_size=128 --model_type=SMPL-Pose --data_mode=3dpw --hrnet_type=w48 --load_checkpoint=pretrained/SMPL-Pose-L_3dpw.pt
    ```

All four commands assume two GPUs (`--nproc_per_node=2`); see the sketch after this list for adapting that to your machine.
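A minimal sketch for matching `--nproc_per_node` to the locally visible GPUs, shown here for the H36M evaluation; the flags are copied from the command above, nothing else is assumed about `main.py`:

```python
# Hypothetical launcher that matches --nproc_per_node to the visible GPUs.
import subprocess
import torch

n_gpus = max(torch.cuda.device_count(), 1)
subprocess.run(
    [
        "python", "-m", "torch.distributed.launch",
        f"--nproc_per_node={n_gpus}", "--use_env", "main.py",
        "--eval_only", "--val_batch_size=128", "--model_type=SMPL-Pose",
        "--data_mode=h36m", "--hrnet_type=w32",
        "--load_checkpoint=pretrained/SMPL-Pose_h36m.pt",
    ],
    check=True,
)
```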
## 6. Training

- For SMPL-Pose:
  1. **Train the CNN backbone on mixed data**:
     ```bash
     python -m torch.distributed.launch --nproc_per_node=2 --use_env main.py --exp_name=backbone --batch_size=100 --num_workers=8 --lr=2e-4 --data_mode=h36m --model_type=backbone --num_epochs=50 --hrnet_type=w32
     ```
  2. **Train SMPL-Pose on mixed data**:
     ```bash
     python -m torch.distributed.launch --nproc_per_node=2 --use_env main.py --exp_name=SMPL-Pose --batch_size=100 --num_workers=8 --lr=2e-4 --data_mode=h36m --model_type=SMPL-Pose --num_epochs=100 --hrnet_type=w32 --load_checkpoint=logs/backbone/checkpoints/epoch_049.pt
     ```
  3. **Finetune SMPL-Pose on 3DPW**:
     ```bash
     python -m torch.distributed.launch --nproc_per_node=1 --use_env main.py --exp_name=SMPL-Pose_3dpw --batch_size=32 --num_workers=8 --lr=1e-4 --data_mode=3dpw --model_type=SMPL-Pose --num_epochs=2 --hrnet_type=w32 --load_checkpoint=logs/SMPL-Pose/checkpoints/epoch_***.pt --summary_steps=100
     ```
- For SMPL-Pose-L:
  1. **Train the CNN backbone on mixed data**:
     ```bash
     python -m torch.distributed.launch --nproc_per_node=2 --use_env main.py --exp_name=backbone-L --batch_size=100 --num_workers=8 --lr=2e-4 --data_mode=h36m --model_type=backbone --num_epochs=50 --hrnet_type=w48
     ```
  2. **Train SMPL-Pose-L on mixed data**:
     ```bash
     python -m torch.distributed.launch --nproc_per_node=2 --use_env main.py --exp_name=SMPL-Pose-L --batch_size=100 --num_workers=8 --lr=2e-4 --data_mode=h36m --model_type=SMPL-Pose --num_epochs=100 --hrnet_type=w48 --load_checkpoint=logs/backbone-L/checkpoints/epoch_049.pt
     ```
  3. **Finetune SMPL-Pose-L on 3DPW**:
     ```bash
     python -m torch.distributed.launch --nproc_per_node=1 --use_env main.py --exp_name=SMPL-Pose-L_3dpw --batch_size=32 --num_workers=8 --lr=1e-4 --data_mode=3dpw --model_type=SMPL-Pose --num_epochs=2 --hrnet_type=w48 --load_checkpoint=logs/SMPL-Pose-L/checkpoints/epoch_***.pt --summary_steps=100
     ```

Replace `epoch_***.pt` in step 3 with the checkpoint produced by step 2; one way to locate it is sketched below.
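A hedged helper for resolving the `epoch_***.pt` placeholder, assuming only the `logs/<exp_name>/checkpoints/epoch_NNN.pt` naming visible in the commands above:

```python
# Hypothetical helper: locate the newest stage-2 checkpoint for finetuning.
# Assumes the logs/<exp_name>/checkpoints/epoch_NNN.pt layout shown above;
# zero-padded epoch numbers make lexicographic sorting chronological.
from pathlib import Path

def latest_checkpoint(exp_name: str) -> Path:
    ckpts = sorted((Path("logs") / exp_name / "checkpoints").glob("epoch_*.pt"))
    if not ckpts:
        raise FileNotFoundError(f"no checkpoints found for {exp_name!r}")
    return ckpts[-1]

# Pass the result via --load_checkpoint in the finetuning command.
print(latest_checkpoint("SMPL-Pose"))  # e.g. logs/SMPL-Pose/checkpoints/epoch_099.pt
```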
## 7. Related Resources
Explore these related resources to deepen your understanding of 3D human modeling:
- [METRO](https://github.com/isarandi/metro-pose3d)
- [GP-NeRF](https://github.com/zyqz97/GP-NeRF)