This repo contains the refactored Python training and inference code for InstantSplat, forked from commit 2c5006d41894d06464da53d5495300860f432872. We refactored the original code to follow the standard Python package structure, while keeping the algorithms identical to the original version.
Initialization methods:
- DUSt3R (same method used in InstantSplat)
- MASt3R (same method used in Splatt3R)
- COLMAP sparse reconstruction (same method used in gaussian-splatting)
- COLMAP dense reconstruction (use `patch_match_stereo`, `stereo_fusion`, `poisson_mesher` and `delaunay_mesher` in COLMAP to reconstruct a dense point cloud for initialization)
- Masking of keypoints during COLMAP feature extraction (just put your masks into the mask folder; e.g. for an image `data/xxx/input/012.jpg`, the mask would be `data/xxx/input_mask/012.jpg.png`; see the sketch after this list)
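For the masking convention above, a minimal sketch of how a mask path is derived from an image path (the helper below is hypothetical, for illustration only, not part of the package):

```python
import os

def mask_path_for(image_path: str) -> str:
    """Hypothetical helper: data/xxx/input/012.jpg -> data/xxx/input_mask/012.jpg.png"""
    folder, name = os.path.split(image_path)   # "data/xxx/input", "012.jpg"
    parent, sub = os.path.split(folder)        # "data/xxx", "input"
    return os.path.join(parent, sub + "_mask", name + ".png")

print(mask_path_for("data/xxx/input/012.jpg"))  # data/xxx/input_mask/012.jpg.png (on POSIX)
```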
Prerequisites:
- PyTorch (v2.4 or higher recommended)
- CUDA Toolkit (12.4 recommended; should match your PyTorch build)
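A quick way to check that the installed CUDA toolkit matches your PyTorch build (plain PyTorch, nothing repo-specific):

```python
import torch

# The CUDA version PyTorch was built against should match the installed toolkit
print("torch:", torch.__version__)
print("built for CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
```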
Install a colmap executable, e.g. using conda:
```shell
conda install conda-forge::colmap
```

(Optional) Install xformers for faster Depth Anything:
```shell
pip install xformers
```

(Optional) If you have trouble with gaussian-splatting, try installing it from source:
```shell
pip install wheel setuptools
pip install --upgrade git+https://github.com/yindaheng98/gaussian-splatting.git@master --no-build-isolation
```

Install instantsplat:

```shell
pip install --upgrade instantsplat
```

or build the latest from source:
```shell
pip install wheel setuptools
pip install --upgrade git+https://github.com/yindaheng98/InstantSplat.git@main --no-build-isolation
```

Or clone the repo and install it locally:

```shell
git clone --recursive https://github.com/yindaheng98/InstantSplat
cd InstantSplat
pip install scipy huggingface_hub einops roma scikit-learn
pip install --target . --upgrade --no-deps .
```

Download the pretrained model checkpoints:

```shell
wget -P checkpoints/ https://download.europe.naverlabs.com/ComputerVision/DUSt3R/DUSt3R_ViTLarge_BaseDecoder_224_linear.pth
wget -P checkpoints/ https://download.europe.naverlabs.com/ComputerVision/DUSt3R/DUSt3R_ViTLarge_BaseDecoder_512_linear.pth
wget -P checkpoints/ https://download.europe.naverlabs.com/ComputerVision/DUSt3R/DUSt3R_ViTLarge_BaseDecoder_512_dpt.pth
wget -P checkpoints/ https://download.europe.naverlabs.com/ComputerVision/MASt3R/MASt3R_ViTLarge_BaseDecoder_512_catmlpdpt_metric.pth
wget -P checkpoints/ https://huggingface.co/depth-anything/Depth-Anything-V2-Small/resolve/main/depth_anything_v2_vits.pth
wget -P checkpoints/ https://huggingface.co/depth-anything/Depth-Anything-V2-Base/resolve/main/depth_anything_v2_vitb.pth
wget -P checkpoints/ https://huggingface.co/depth-anything/Depth-Anything-V2-Large/resolve/main/depth_anything_v2_vitl.pth
```

- Initialize the coarse point cloud and jointly train 3DGS & cameras:
```shell
# Option 1: init and train in one command
python -m instantsplat.train -s data/sora/santorini/3_views -d output/sora/santorini/3_views -i 1000 --init dust3r
# Option 2: init and train in two separate commands
python -m instantsplat.initialize -d data/sora/santorini/3_views -i dust3r # init coarse point cloud and save it as a COLMAP workspace at data/sora/santorini/3_views
python -m instantsplat.train -s data/sora/santorini/3_views -d output/sora/santorini/3_views -i 1000 # train
```

- Render it:
```shell
python -m gaussian_splatting.render -s data/sora/santorini/3_views -d output/sora/santorini/3_views -i 1000 --load_camera output/sora/santorini/3_views/cameras.json
```

See .vscode/launch.json for more command examples.
See instantsplat.initialize, instantsplat.train and gaussian_splatting.render for full examples.
Also check yindaheng98/gaussian-splatting for more details on the training process.

- Use `CameraTrainableGaussianModel` from yindaheng98/gaussian-splatting
- Use `TrainableCameraDataset` from yindaheng98/gaussian-splatting
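A minimal construction sketch for the model; the import path and constructor signature below are assumptions based on yindaheng98/gaussian-splatting, so check that repo for the actual module layout:

```python
import torch

# Assumed import path; see yindaheng98/gaussian-splatting for the real one
from gaussian_splatting import CameraTrainableGaussianModel

device = torch.device("cuda")
# sh_degree=3 is the common 3DGS default; the exact constructor arguments are assumed
gaussians = CameraTrainableGaussianModel(sh_degree=3).to(device)
```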
Initialize the coarse point cloud and cameras:

```python
import os

from instantsplat.initializer import Dust3rInitializer

image_path_list = [os.path.join(image_folder, file) for file in sorted(os.listdir(image_folder))]
initializer = Dust3rInitializer(...).to(args.device)  # see instantsplat/initializer/dust3r/dust3r.py for full options
initialized_point_cloud, initialized_cameras = initializer(image_path_list=image_path_list)
```

Create a camera dataset from the initialized cameras:
```python
from instantsplat.initializer import TrainableInitializedCameraDataset

dataset = TrainableInitializedCameraDataset(initialized_cameras).to(device)
```

Initialize the 3DGS from the initialized coarse point cloud:
```python
gaussians.create_from_pcd(initialized_point_cloud.points, initialized_point_cloud.colors)
```

The trainer jointly optimizes the 3DGS parameters and cameras, without densification:
```python
from instantsplat.trainer import Trainer

trainer = Trainer(
    gaussians,
    scene_extent=dataset.scene_extent(),
    dataset=dataset,
    ...  # see instantsplat/trainer/trainer.py for full options
)
```

*(demo video: example.mp4)*
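For orientation, a hypothetical end-to-end loop; `trainer.step` and its return value are illustrative placeholders, not the confirmed API (see instantsplat/trainer/trainer.py and the instantsplat.train entry point for the real loop):

```python
# Hypothetical usage: method names are illustrative, not the confirmed API.
for it in range(1000):
    camera = dataset[it % len(dataset)]  # cycle through the training views
    loss = trainer.step(camera)          # hypothetical: one joint 3DGS + camera update
    if (it + 1) % 100 == 0:
        print(f"iter {it + 1}: loss = {loss}")
```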
TODO:
- Confidence-aware Point Cloud Downsampling
- Support 2D-GS
- Support Mip-Splatting
This work is built on many amazing research works and open-source projects; thanks a lot to all the authors for sharing!
If you find our work useful in your research, please consider giving a star ⭐ and citing the following paper 📝.
```bibtex
@misc{fan2024instantsplat,
      title={InstantSplat: Unbounded Sparse-view Pose-free Gaussian Splatting in 40 Seconds},
      author={Zhiwen Fan and Wenyan Cong and Kairun Wen and Kevin Wang and Jian Zhang and Xinghao Ding and Danfei Xu and Boris Ivanovic and Marco Pavone and Georgios Pavlakos and Zhangyang Wang and Yue Wang},
      year={2024},
      eprint={2403.20309},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```