- Create a virtual environment via conda.

  ```shell
  conda create -n our-nerf python=3.9
  conda activate our-nerf
  ```

- Install `torch` and `torchvision`.

  ```shell
  conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
  ```

- Install requirements.

  ```shell
  pip install -r requirements.txt
  ```
- We evaluate our model on KITTI-360. You can download it from here and modify the roots in `./configs/our-nerf.yaml` accordingly. The structure of a test dataset is shown below.

  ```
  KITTI-360
  ├── 2013_05_28_drive_0000_sync
  │   ├── image_00
  │   └── image_01
  ├── calibration
  │   ├── calib_cam_to_pose.txt
  │   └── perspective.txt
  ├── data_2d_semantics
  │   └── train
  │       └── 2013_05_28_drive_0000_sync
  │           └── image_00
  │               └── instance
  ├── data_3d_bboxes
  └── data_poses
      └── 2013_05_28_drive_0000_sync
          ├── cam0_to_world.txt
          └── poses.txt
  ```

  | file | Intro |
  | ---- | ----- |
  | `image_00/01` | stereo RGB images |
  | `data_poses` | system poses in a global Euclidean coordinate |
  | `calibration` | extrinsics and intrinsics of the perspective cameras |
  | `instance` | instance labels in single-channel 16-bit PNG format; each pixel value denotes the corresponding instanceID |
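The instance PNGs pack both the semantic class and the per-object index into a single 16-bit pixel value. A minimal sketch of decoding it, assuming the standard KITTI-360 convention `instanceID = semanticID * 1000 + object index` (the helper name is ours, not part of this repository):

```python
# Hypothetical helper: decode a KITTI-360 instance-label pixel value.
# KITTI-360's official scripts encode each pixel of the 16-bit
# instance PNG as: instanceID = semanticID * 1000 + object index.

def decode_instance_id(instance_id: int) -> tuple[int, int]:
    """Split a raw instanceID pixel value into (semanticID, object index)."""
    return instance_id // 1000, instance_id % 1000

# Example: pixel value 26004 -> semantic class 26 (the Cityscapes-style
# "car" class), object index 4 within that class.
print(decode_instance_id(26004))  # (26, 4)
```

In practice you would load the PNG (e.g. with PIL in 16-bit mode) into an integer array and apply the same `// 1000` / `% 1000` split elementwise.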
- We provide the training code. Use the following command to train your own model and render novel-view appearances from the scene and object branches. Every 1,000 iterations take about 1.5 minutes on a single NVIDIA GeForce RTX™ 3090.

  ```shell
  python our-nerf.py --cfg_file configs/our-nerf.yaml
  ```
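As a rough budgeting aid, the reported throughput (~1.5 minutes per 1,000 iterations on an RTX 3090) can be extrapolated to a full run; the 200k-iteration budget below is an illustrative placeholder, not a default from this repository:

```python
# Estimate wall-clock training time from the reported throughput:
# ~1.5 minutes per 1,000 iterations on a single RTX 3090.
MINUTES_PER_1K_ITERS = 1.5

def estimated_hours(total_iters: int) -> float:
    """Total training time in hours for a given iteration budget."""
    return total_iters / 1000 * MINUTES_PER_1K_ITERS / 60

# e.g. a 200k-iteration run (placeholder budget) would take ~5 hours.
print(estimated_hours(200_000))  # 5.0
```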
- Use the following command to visualize novel-view appearance after scene editing, passing the flag on the command line:

  ```shell
  python our-nerf.py --cfg_file configs/our-nerf.yaml is_editing True
  ```

  Alternatively, set `is_editing` to `True` in the config file `./configs/our-nerf.yaml` and run:

  ```shell
  python our-nerf.py --cfg_file configs/our-nerf.yaml
  ```
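If you take the config-file route, the flag would look like this inside `./configs/our-nerf.yaml` (a sketch only — the surrounding keys and the flag's exact position in the file are assumptions; only the `is_editing` name comes from the text above):

```yaml
# ./configs/our-nerf.yaml (fragment)
is_editing: True   # set back to False for ordinary novel-view rendering
```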
Copyright © 2022, Zhejiang University. All rights reserved. We welcome any inquiries; please contact jacey.huang@zju.edu.cn.