Swift4D: Adaptive divide-and-conquer Gaussian Splatting for compact and efficient reconstruction of dynamic scene
Jiahao Wu , Rui Peng , Zhiyan Wang, Lu Xiao,
Luyang Tang, Kaiqiang Xiong, Ronggang Wang ✉
ICLR 2025 Paper
Please follow 3D-GS to install the required packages.
```shell
git clone --recursive https://github.com/WuJH2001/swift4d.git
cd swift4d
conda create -n swift4d python=3.8
conda activate swift4d
pip install -r requirements.txt
pip install submodules/diff-gaussian-rasterization
pip install submodules/simple-knn
```

After that, you also need to install tiny-cuda-nn.
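Building the CUDA submodules above fails silently confusingly if the CUDA toolkit is not on your PATH. A small stdlib-only sanity check you can run first (the helper name is ours, not part of this repo):

```python
import shutil

def cuda_toolchain_available():
    """Return True if nvcc, the CUDA compiler needed to build
    diff-gaussian-rasterization, simple-knn, and tiny-cuda-nn,
    is discoverable on PATH."""
    return shutil.which("nvcc") is not None

if __name__ == "__main__":
    print("nvcc found:", cuda_toolchain_available())
```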
For real dynamic scenes:
The Plenoptic (DyNeRF) dataset can be downloaded from its official website. To save memory, you should extract the frames of each video and then organize your dataset as follows.
```
├── data
│   ├── dynerf
│   │   ├── cook_spinach
│   │   │   ├── cam00
│   │   │   │   ├── images
│   │   │   │   │   ├── 0000.png
│   │   │   │   │   ├── 0001.png
│   │   │   │   │   ├── 0002.png
│   │   │   │   │   ├── ...
│   │   │   ├── cam01
│   │   │   │   ├── images
│   │   │   │   │   ├── 0000.png
│   │   │   │   │   ├── 0001.png
│   │   │   │   │   ├── ...
│   │   ├── cut_roasted_beef
│   │   │   ├── ...
```
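When extracting frames yourself, the per-camera folder layout above can be created programmatically. A minimal sketch (the helper name `make_scene_layout` is ours, not a script shipped with this repo):

```python
import os

def make_scene_layout(root, scene, num_cams):
    """Create the layout shown above:
    <root>/dynerf/<scene>/cam00/images, cam01/images, ...
    Extracted frame PNGs (0000.png, 0001.png, ...) then go
    inside each images/ folder."""
    paths = []
    for cam in range(num_cams):
        p = os.path.join(root, "dynerf", scene, f"cam{cam:02d}", "images")
        os.makedirs(p, exist_ok=True)
        paths.append(p)
    return paths
```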
For your own multi-view dynamic scenes:
You may need to follow 3DGStream to prepare the data.
For training dynerf scenes such as `cut_roasted_beef`, run:

```shell
# First, extract the frames of each video.
python scripts/preprocess_dynerf.py --datadir data/dynerf/cut_roasted_beef
# Second, generate point clouds from the input data.
bash colmap.sh data/dynerf/cut_roasted_beef llff
# Third, downsample the point clouds generated in the second step.
python scripts/downsample_point.py data/dynerf/cut_roasted_beef/colmap/dense/workspace/fused.ply data/dynerf/cut_roasted_beef/points3D_downsample2.ply
# Finally, train.
python train.py -s data/dynerf/cut_roasted_beef --port 6017 --expname "dynerf/cut_roasted_beef" --configs arguments/dynerf/cut_roasted_beef.py
```

Run the following script to render the images:

```shell
python render.py --model_path output/dynerf/cut_roasted_beef --skip_train --skip_video --iteration 13000 --configs arguments/dynerf/cut_roasted_beef.py
```
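The point-cloud downsampling in the third step above is conceptually a voxel-grid filter: the dense fused cloud is thinned to roughly one point per spatial cell. An illustrative pure-Python sketch of that idea (not the actual logic of `scripts/downsample_point.py`):

```python
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Keep one representative point per occupied voxel:
    the centroid of all points falling into that cell."""
    cells = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        cells[key].append((x, y, z))
    out = []
    for pts in cells.values():
        n = len(pts)
        out.append(tuple(sum(c) / n for c in zip(*pts)))
    return out
```

Larger `voxel_size` values thin the cloud more aggressively, trading initialization detail for a smaller starting set of Gaussians.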
Run the following script to evaluate the model:

```shell
python metrics.py --model_path output/dynerf/coffee_martini/
```
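A standard quantitative metric for this kind of novel-view synthesis evaluation is PSNR. An illustrative pure-Python sketch (not the code used by `metrics.py`):

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given as flat sequences of pixel values. Higher is better;
    identical images give infinity."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)
```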
You can find our dynerf models here. The VRU Basketball dataset we used in the paper can be found here. You can also download our VRU Basketball dataset from 🤗 Hugging Face here. Feel free to use it for training your model or validating your method!
This project is still under development. Please feel free to raise issues or submit pull requests to contribute to our codebase. Thanks to 4DGS for their excellent work.
```bibtex
@inproceedings{wuswift4d,
  title={Swift4D: Adaptive divide-and-conquer Gaussian Splatting for compact and efficient reconstruction of dynamic scene},
  author={Wu, Jiahao and Peng, Rui and Wang, Zhiyan and Xiao, Lu and Tang, Luyang and Yan, Jinbo and Xiong, Kaiqiang and Wang, Ronggang},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025}
}
```

