This is the official implementation of our CVPR 2025 paper *BARD-GS: Blur-Aware Reconstruction of Dynamic Scenes via Gaussian Splatting*.
```shell
# create a conda environment
conda create -n BARD-GS -y python=3.10
conda activate BARD-GS

# install dependencies
pip install --upgrade pip setuptools
pip install torch==2.1.2 torchvision==0.16.2 --index-url https://download.pytorch.org/whl/cu118
conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit
pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

# install nerfstudio
pip install nerfstudio==1.0.3
```

Then you can clone this repo and install:

```shell
git clone https://github.com/luyr/BARD-GS.git
cd BARD-GS
pip install -e .
```

Download the processed dataset from this link.
To be released
- For the BARD-GS real-world dataset, train with:

  ```shell
  ns-train BARD-GS \
      --data <path to dataset>/card/ \
      --experiment_name <your exp name> \
      --vis wandb bard-gs-data
  ```

- For the DyCheck synthetic motion-blur dataset, train with:

  ```shell
  ns-train BARD-GS \
      --data <path to dataset>/paper-windmill/ \
      --experiment_name <your exp name> \
      --vis wandb deblur-gs-data
  ```
To render the test split:

```shell
ns-render dataset \
    --load-config <your training config> \
    --split test \
    --rendered-output-names rgb \
    --output-path ./renders/<your exp name> \
    --image-format png
```

To debug, open this repo in your IDE, create a run configuration, and set the Python script path to `<nerfstudio_path>/nerfstudio/scripts/train.py` with the parameters above.
- BARD-GS real world dataset
- Training script
- Dycheck synthetic dataset
- Evaluation script
- Preprocessing script
If you find this useful, please consider citing:
```bibtex
@inproceedings{lu2025bard,
  title={BARD-GS: Blur-Aware Reconstruction of Dynamic Scenes via Gaussian Splatting},
  author={Lu, Yiren and Zhou, Yunlai and Liu, Disheng and Liang, Tuo and Yin, Yu},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={16532--16542},
  year={2025}
}
```