
[CoRL 2025] BEVCalib: LiDAR-Camera Calibration via Geometry-Guided Bird's-Eye View Representation



Getting Started

Prerequisites

First create a conda environment:

conda create -n bevcalib python=3.11
conda activate bevcalib
pip3 install -r requirements.txt

The code is built on the libraries listed in requirements.txt. In addition, we recommend installing cuda-toolkit=11.8 with the following command:

conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit

After installing the dependencies above, please run the following command to build the bev_pool CUDA extension:

cd ./kitti-bev-calib/img_branch/bev_pool && python setup.py build_ext --inplace
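
As a quick sanity check (a sketch, not part of the official scripts), you can verify that PyTorch sees your GPU and that its CUDA version matches the installed toolkit:

python -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"  # expect: True 11.8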

We also provide a Dockerfile for easy setup. Please run the following commands to build the Docker image and install the CUDA extensions:

docker build -f Dockerfile/Dockerfile -t bevcalib .
docker run --gpus all -it -v $(pwd):/workspace bevcalib
### Inside the container, run the following command to build the CUDA extensions
cd ./kitti-bev-calib/img_branch/bev_pool && python setup.py build_ext --inplace

Dataset Preparation

KITTI-Odometry

We release the code to reproduce our results on the KITTI-Odometry dataset. Please download the dataset from here. After downloading, the directory structure should look like:

kitti-odometry/
├── sequences/         
│   ├── 00/            
│   │   ├── image_2/  
│   │   ├── image_3/   
│   │   ├── velodyne/
│   │   └── calib.txt 
│   ├── 01/
│   │   ├── ...
│   └── 21/
│       └── ...
└── poses/            
    ├── 00.txt        
    ├── 01.txt
    └── ...
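
If you want to double-check the layout before training, a minimal sketch like the following can verify each sequence folder (the root path is a placeholder and the helper is purely illustrative, not part of the repo):

from pathlib import Path

root = Path("YOUR_PATH_TO_KITTI/kitti-odometry")  # adjust to your download location
required = ("image_2", "image_3", "velodyne", "calib.txt")
for seq in sorted((root / "sequences").iterdir()):
    # Report any missing images, point clouds, or calibration files per sequence.
    missing = [name for name in required if not (seq / name).exists()]
    print(seq.name, "ok" if not missing else f"missing: {missing}")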

CalibDB

Coming soon!

Pretrained Model

We release our pretrained model on the KITTI-Odometry dataset. We provide two ways to download our models.

Google Drive

Please find the pretrained model on Google Drive and place it in the ./ckpt directory. For convenience, you can also install gdown (pip3 install gdown) and download the KITTI checkpoint from the command line:

gdown https://drive.google.com/uc\?id\=1gWO-Z4NXG2uWwsZPecjWByaZVtgJ0XNb

Hugging Face

We also release our pretrained model on our Hugging Face page. Install the Hugging Face CLI with pip install -U "huggingface_hub[cli]", then download the pretrained model by running:

huggingface-cli download cisl-hf/BEVCalib --revision kitti-bev-calib --local-dir YOUR_LOCAL_PATH
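
To confirm the download is intact, a quick check like the following can be run (a sketch; it assumes a standard PyTorch checkpoint saved with torch.save, and the keys inside may differ):

import torch

# Load on CPU just to inspect the file; weights_only=False is needed on newer
# PyTorch versions if the checkpoint stores more than raw tensors.
ckpt = torch.load("./ckpt/ckpt.pth", map_location="cpu", weights_only=False)
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])  # peek at the top-level keys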

Evaluation

Please run the following command to evaluate the model:

python kitti-bev-calib/inference_kitti.py \
         --log_dir ./logs/kitti \
         --dataset_root YOUR_PATH_TO_KITTI/kitti-odometry \
         --ckpt_path YOUR_PATH_TO_KITTI_CHECKPOINT/ckpt/ckpt.pth \
         --angle_range_deg 20.0 \
         --trans_range 1.5
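
For reference, LiDAR-camera extrinsic error is commonly reported as the geodesic angle of the residual rotation (degrees) and the L2 norm of the residual translation (meters). The sketch below illustrates this convention; it is not necessarily the exact metric code used inside inference_kitti.py:

import numpy as np

def calib_errors(T_pred, T_gt):
    """Rotation (deg) and translation (m) error between two 4x4 extrinsic matrices."""
    dT = np.linalg.inv(T_gt) @ T_pred               # residual transform
    R, t = dT[:3, :3], dT[:3, 3]
    cos_angle = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    rot_err_deg = np.degrees(np.arccos(cos_angle))  # geodesic rotation error
    trans_err_m = np.linalg.norm(t)                 # Euclidean translation error
    return rot_err_deg, trans_err_m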

Training

We provide instructions to reproduce our results on the KITTI-Odometry dataset. Please run:

python kitti-bev-calib/train_kitti.py --log_dir ./logs/kitti \
        --dataset_root YOUR_PATH_TO_KITTI/kitti-odometry \
        --save_ckpt_per_epoches 40 --num_epochs 500 --label 20_1.5 --angle_range_deg 20 --trans_range 1.5 \
        --deformable 0 --bev_encoder 1 --batch_size 16 --xyz_only 1 --scheduler 1 --lr 1e-4 --step_size 80

You can change --angle_range_deg and --trans_range to train under different noise settings. You can also use --pretrain_ckpt to load a pretrained model and fine-tune on your own dataset.
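
For intuition, these flags control how far the ground-truth extrinsics are randomly perturbed during training. The sketch below shows one way such a perturbation could be sampled, using scipy for the rotation; the actual sampling logic in train_kitti.py may differ:

import numpy as np
from scipy.spatial.transform import Rotation

def sample_perturbation(angle_range_deg=20.0, trans_range=1.5):
    """Return a random 4x4 perturbation within the given rotation/translation ranges."""
    angles = np.random.uniform(-angle_range_deg, angle_range_deg, size=3)  # degrees per axis
    trans = np.random.uniform(-trans_range, trans_range, size=3)           # meters per axis
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("xyz", angles, degrees=True).as_matrix()
    T[:3, 3] = trans
    return T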

Acknowledgement

BEVCalib appreciates the following great open-source projects: BEVFusion, LCCNet, LSS, spconv, and Deformable Attention.

Citation

@inproceedings{bevcalib,
      title={BEVCALIB: LiDAR-Camera Calibration via Geometry-Guided Bird's-Eye View Representations}, 
      author={Weiduo Yuan and Jerry Li and Justin Yue and Divyank Shah and Konstantinos Karydis and Hang Qiu},
      booktitle={9th Annual Conference on Robot Learning},
      year={2025},
}
