[JBHI'24] MonoLoT

Official PyTorch implementation of MonoLoT: Self-Supervised Monocular Depth Estimation in Low-Texture Scenes for Automatic Robotic Endoscopy.

Qi HE, Guang Feng, Sophia Bano, Danail Stoyanov, Siyang Zuo

[IEEE Xplore] [YouTube] [GitHub]

Overview

Our research has introduced an innovative approach that addresses the challenges associated with self-supervised monocular depth estimation in digestive endoscopy. We have addressed two critical aspects: the limitations of self-supervised depth estimation in low-texture scenes and the application of depth estimation in visual servoing for digestive endoscopy. Our investigation has revealed that the struggles of self-supervised depth estimation in low-texture scenes stem from inaccurate photometric reconstruction losses. To overcome this, we have introduced the point-matching loss, which refines the reprojected points. Furthermore, during the training process, data augmentation is achieved through batch image shuffle loss, significantly improving the accuracy and generalisation capability of the depth model. The combined contributions of the point matching loss and batch image shuffle loss have boosted the baseline accuracy by a minimum of 5% on both the C3VD and SimCol datasets, surpassing the generalisability of ground truth depth-supervised baselines when applied to upper-GI datasets. Moreover, the successful implementation of our robotic platform for automatic intervention in digestive endoscopy demonstrates the practical and impactful application of monocular depth estimation technology.
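For intuition, below is a minimal PyTorch sketch of a point-matching style loss: given pre-computed pixel correspondences between two frames (like those stored in matcher_results), predicted depth and relative pose, and known intrinsics, it penalises the distance between reprojected points and their matched locations. This is an illustrative approximation under assumed tensor shapes, not the exact loss implemented in this repository.

```python
import torch

def point_matching_loss(depth, T, K, K_inv, pts_src, pts_tgt):
    """Illustrative point-matching loss (not the repo's exact implementation).

    depth:    (B, 1, H, W) predicted depth for the source frame
    T:        (B, 4, 4) predicted relative pose, source -> target
    K, K_inv: (B, 3, 3) camera intrinsics and their inverse
    pts_src:  (B, N, 2) matched pixel coordinates (u, v) in the source frame
    pts_tgt:  (B, N, 2) matched pixel coordinates (u, v) in the target frame
    """
    B, N, _ = pts_src.shape
    u, v = pts_src[..., 0].long(), pts_src[..., 1].long()
    d = depth[:, 0][torch.arange(B)[:, None], v, u]              # (B, N) depth at matched pixels

    ones = torch.ones_like(d)
    pix_h = torch.stack([pts_src[..., 0], pts_src[..., 1], ones], dim=-1)   # (B, N, 3) homogeneous pixels
    cam = (K_inv @ pix_h.transpose(1, 2)) * d.unsqueeze(1)                  # back-project to 3D camera points
    cam_h = torch.cat([cam, ones.unsqueeze(1)], dim=1)                      # (B, 4, N)

    proj = K @ (T @ cam_h)[:, :3]                        # transform into target frame and project
    proj = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)    # (B, 2, N) reprojected pixel coordinates

    # Penalise the distance between reprojected points and their matched targets.
    return (proj.transpose(1, 2) - pts_tgt).norm(dim=-1).mean()
```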

Performance

Installation

We tested our code on a server with Ubuntu 18.04.6, CUDA 11.1, and GCC 7.5.0.

  1. Clone the project
$ git clone https://github.com/howardchina/MonoLoT.git
$ cd MonoLoT
  2. Install the environment
$ conda create --name monolot --file requirements.txt
$ conda activate monolot
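As an optional sanity check (not part of the original setup instructions), you can confirm that PyTorch sees the GPU before training:

```python
import torch

# Quick environment check: should report the installed torch version,
# True for CUDA availability, the CUDA version (e.g. 11.1), and the GPU name.
print(torch.__version__)
print(torch.cuda.is_available())
print(torch.version.cuda)
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```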

Data

First, create a data/ folder inside the project directory:

$ mkdir data

The data structure will be organised as follows:

$ tree data
data
├── c3vd_v2
│   ├── imgs -> <c3vd_v2_img_dir>
│   └── matcher_results
│       ├── test.npy
│       ├── train.npy
│       └── val.npy
└── simcol_complete
    ├── imgs -> <simcol_img_dir>
    └── matcher_results
        ├── test_352x352.npy
        ├── train_352x352.npy
        └── val_352x352.npy
...

Second, some image preprocessing is necessary, such as undistortion, static-frame filtering, and data splitting.

Take c3vd_v2 for instance; run the following notebooks:

  • undistort frames: playground/heqi/C3VD/data_preprocessing.ipynb (a generic undistortion sketch follows this list)
  • (optional) filter static frames and split the data yourself: playground/heqi/C3VD/gen_split.ipynb
  • (optional) generate matcher_results yourself: playground/heqi/C3VD/gen_corres.ipynb
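For readers who prefer a script to a notebook, here is a minimal, generic undistortion sketch using OpenCV's pinhole model with placeholder intrinsics and distortion coefficients. The actual calibration and camera model used in data_preprocessing.ipynb may differ, so treat this only as an illustration of the step.

```python
import glob
import os

import cv2
import numpy as np

# Placeholder pinhole intrinsics and distortion coefficients -- NOT the C3VD
# calibration. Replace them with the values used in data_preprocessing.ipynb.
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.2, 0.05, 0.0, 0.0, 0.0])

src_dir, dst_dir = "raw_frames", "undistorted_frames"   # hypothetical folders
os.makedirs(dst_dir, exist_ok=True)

for path in sorted(glob.glob(os.path.join(src_dir, "*_color.png"))):
    img = cv2.imread(path)
    und = cv2.undistort(img, K, dist)                    # remove lens distortion
    cv2.imwrite(os.path.join(dst_dir, os.path.basename(path)), und)
```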

(optional) Similar preprocessing should be applied to simcol_complete as well; see playground/heqi/Simcol_complete.

We provide two ways to obtain the matching results saved in the matcher_results folders:

  • (recommended) download the matcher_results of c3vd_v2 and simcol_complete from here
  • (not recommended) generate the matcher_results yourself using the notebooks mentioned above, such as playground/heqi/C3VD/gen_corres.ipynb; this process takes about 2-4 hours

Soft-link (->) the prepared image folders into this workspace, as in the sketch below.
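A minimal sketch of creating these links from Python, using hypothetical source paths; a plain ln -s works just as well.

```python
import os

# Hypothetical absolute paths to the downloaded/extracted image folders.
c3vd_imgs = "/datasets/c3vd_v2/imgs"
simcol_imgs = "/datasets/simcol/imgs"

# Make sure the data/ subfolders exist before linking.
os.makedirs("data/c3vd_v2/matcher_results", exist_ok=True)
os.makedirs("data/simcol_complete/matcher_results", exist_ok=True)

# Create the imgs -> <..._img_dir> links expected by the layout above.
os.symlink(c3vd_imgs, "data/c3vd_v2/imgs")
os.symlink(simcol_imgs, "data/simcol_complete/imgs")
```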

The image folder of c3vd_v2 (<c3vd_v2_img_dir>) will be organised as follows:

$ cd c3vd_v2
$ tree imgs
imgs
├── cecum_t1_a_under_review
│   ├── 0000_color.png
│   ├── 0000_depth.tiff
│   ├── 0001_color.png
│   ├── 0001_depth.tiff
│   ├── 0002_color.png
│   ├── 0002_depth.tiff
│   ├── 0003_color.png
│   ├── 0003_depth.tiff
│   ├── 0004_color.png
│   ├── 0004_depth.tiff
│   ├── ...
├── cecum_t1_b_under_review
│   ├── ...
├── cecum_t2_a_under_review
├── cecum_t2_b_under_review
├── cecum_t2_c_under_review
├── cecum_t3_a_under_review
├── cecum_t4_a_under_review
├── cecum_t4_b_under_review
├── desc_t4_a_under_review
├── sigmoid_t1_a_under_review
├── sigmoid_t2_a_under_review
├── sigmoid_t3_a_under_review
├── sigmoid_t3_b_under_review
├── trans_t1_a_under_review
├── trans_t1_b_under_review
├── trans_t2_a_under_review
├── trans_t2_b_under_review
├── trans_t2_c_under_review
├── trans_t3_a_under_review
├── trans_t3_b_under_review
├── trans_t4_a_under_review
└── trans_t4_b_under_review

The image folder of simcol_complete (<simcol_img_dir>) will be organised as follows:

$ cd ..
$ cd simcol_complete
$ tree imgs
imgs
├── SyntheticColon_I
│   ├── Test_labels
│   │   ├── Frames_S10
│   │   │   ├── Depth_0000.png
│   │   │   ├── Depth_0001.png
│   │   │   ├── Depth_0002.png
│   │   │   ├── Depth_0003.png
│   │   │   ├── Depth_0004.png
│   │   │   ├── Depth_0005.png
│   │   │   ...
│   │   │   ├── Depth_1200.png
│   │   │   ├── FrameBuffer_0000.png
│   │   │   ├── FrameBuffer_0001.png
│   │   │   ├── FrameBuffer_0002.png
│   │   │   ├── FrameBuffer_0003.png
│   │   │   ├── FrameBuffer_0004.png
│   │   │   ├── FrameBuffer_0005.png
│   │   │   ...
│   │   │   └── FrameBuffer_1200.png
│   │   ├── Frames_S15
│   │   └── Frames_S5
│   ├── Train
│   │   ├── Frames_S1
│   │   ├── Frames_S11
│   │   ├── Frames_S12
│   │   ├── Frames_S13
│   │   ├── Frames_S2
│   │   ├── Frames_S3
│   │   ├── Frames_S6
│   │   ├── Frames_S7
│   │   └── Frames_S8
│   └── Val
│       ├── Frames_S14
│       ├── Frames_S4
│       └── Frames_S9
├── SyntheticColon_II
│   ├── Test_labels
│   │   ├── Frames_B10
│   │   ├── Frames_B15
│   │   └── Frames_B5
│   ├── Train
│   │   ├── Frames_B1
│   │   ├── Frames_B11
│   │   ├── Frames_B12
│   │   ├── Frames_B13
│   │   ├── Frames_B2
│   │   ├── Frames_B3
│   │   ├── Frames_B6
│   │   ├── Frames_B7
│   │   └── Frames_B8
│   └── Val
│       ├── Frames_B14
│       ├── Frames_B4
│       └── Frames_B9
└── SyntheticColon_III
    ├── Test_labels
    │   ├── Frames_O1
    │   ├── Frames_O2
    │   └── Frames_O3
    └── Train

Public Data

For both training and inference:

  • The C3VD dataset is available on its homepage.
  • The SimCol dataset is available at rdr.ucl.

For inference only:

  • The UpperGI dataset is available on Nutstore.
  • The EndoSLAM dataset is available on Mendeley.
  • The EndoMapper dataset is available on Synapse.

Training

For training, we provide the following configuration files in the experiments folder:

Table IV: train and test on C3VD (configs in the experiments/c3vd_v2/ folder)

| Method | Supervision | Config |
|---|---|---|
| Monodepth2 (baseline) | D | supervised_c3vd_v2_monodepth2.yml |
| Lite-mono (baseline) | D | supervised_c3vd_v2_litemono.yml |
| MonoViT (baseline) | D | supervised_c3vd_v2_monovit.yml |
| Monodepth2 (baseline) | M | baseline_c3vd_v2_monodepth2.yml |
| Lite-mono (baseline) | M | baseline_c3vd_v2_litemono.yml |
| MonoViT (baseline) | M | baseline_c3vd_v2_monovit.yml |
| Monodepth2 + $\mathcal{L}_{bis}^*$ + $\mathcal{L}_m$ (MonoLoT) | M | RCC_matching_cropalign_c3vd_v2_monodepth2.yml |
| Lite-mono + $\mathcal{L}_{bis}^+$ + $\mathcal{L}_m$ (MonoLoT) | M | RC_matching_c3vd_v2_litemono.yml |
| MonoViT + $\mathcal{L}_{bis}^+$ + $\mathcal{L}_m$ (MonoLoT) | M | RC_matching_c3vd_v2_monovit.yml |

Table V: train and test on SimCol (configs in the experiments/simcol_complete/ folder)

| Method | Supervision | Config |
|---|---|---|
| Monodepth2 (baseline) | D | supervised_simcol_complete_monodepth2.yml |
| Lite-mono (baseline) | D | supervised_simcol_complete_litemono.yml |
| MonoViT (baseline) | D | supervised_simcol_complete_monovit.yml |
| Monodepth2 (baseline) | M | baseline_simcol_complete_monodepth2.yml |
| Lite-mono (baseline) | M | baseline_simcol_complete_litemono.yml |
| MonoViT (baseline) | M | baseline_simcol_complete_monovit.yml |
| Monodepth2 + $\mathcal{L}_{bis}^*$ + $\mathcal{L}_m$ (MonoLoT) | M | RCC_cropalign_matching_simcol_complete_monodepth2.yml |
| Lite-mono + $\mathcal{L}_{bis}^*$ + $\mathcal{L}_m$ (MonoLoT) | M | RCC_cropalign_matching_simcol_complete_litemono.yml |
| MonoViT + $\mathcal{L}_{bis}^+$ + $\mathcal{L}_m$ (MonoLoT) | M | RC_matching_simcol_complete_monovit.yml |

Table VI: ablation study on C3VD (configs in the experiments/ablation_c3vd_v2/ folder)

| Model | $\mathcal{L}_{bis}^+$ | $\mathcal{L}_{bis}^*$ | $\mathcal{L}_m$ | Config |
|---|---|---|---|---|
| Monodepth2 (baseline) | | | | baseline_c3vd_v2_monodepth2.yml |
| Monodepth2 | ✓ | | | RC_baseline_c3vd_v2_monodepth2.yml |
| Monodepth2 | | ✓ | | RCC_cropalign_c3vd_v2_monodepth2.yml |
| Monodepth2 | | | ✓ | matching_c3vd_v2_monodepth2.yml |
| Monodepth2 | ✓ | | ✓ | RC_matching_c3vd_v2_monodepth2.yml |
| Monodepth2 | | ✓ | ✓ | RCC_matching_cropalign_c3vd_v2_monodepth2.yml |
| Lite-mono (baseline) | | | | baseline_c3vd_v2_litemono.yml |
| Lite-mono | ✓ | | | RC_baseline_c3vd_v2_litemono.yml |
| Lite-mono | | ✓ | | RCC_cropalign_c3vd_v2_litemono.yml |
| Lite-mono | | | ✓ | matching_c3vd_v2_litemono.yml |
| Lite-mono | ✓ | | ✓ | RC_matching_c3vd_v2_litemono.yml |
| Lite-mono | | ✓ | ✓ | RCC_cropalign_matching_c3vd_v2_litemono.yml |
| MonoViT (baseline) | | | | baseline_c3vd_v2_monovit.yml |
| MonoViT | ✓ | | | RC_baseline_c3vd_v2_monovit.yml |
| MonoViT | | ✓ | | RCC_cropalign_c3vd_v2_monovit.yml |
| MonoViT | | | ✓ | matching_c3vd_v2_monovit.yml |
| MonoViT | ✓ | | ✓ | RC_matching_c3vd_v2_monovit.yml |
| MonoViT | | ✓ | ✓ | RCC_cropalign_matching_c3vd_v2_monovit.yml |

Table VII: ablation study on SimCol (configs in the experiments/ablation_simcol_complete/ folder)

| Model | $\mathcal{L}_{bis}^+$ | $\mathcal{L}_{bis}^*$ | $\mathcal{L}_m$ | Config |
|---|---|---|---|---|
| Monodepth2 (baseline) | | | | baseline_simcol_complete_monodepth2.yml |
| Monodepth2 | ✓ | | | RC_simcol_complete_monodepth2.yml |
| Monodepth2 | | ✓ | | RCC_cropalign_simcol_complete_monodepth2.yml |
| Monodepth2 | | | ✓ | matching_simcol_complete_monodepth2.yml |
| Monodepth2 | ✓ | | ✓ | RC_matching_simcol_complete_monodepth2.yml |
| Monodepth2 | | ✓ | ✓ | RCC_cropalign_matching_simcol_complete_monodepth2.yml |
| Lite-mono (baseline) | | | | baseline_simcol_complete_litemono.yml |
| Lite-mono | ✓ | | | RC_simcol_complete_litemono.yml |
| Lite-mono | | ✓ | | RCC_cropalign_simcol_complete_litemono.yml |
| Lite-mono | | | ✓ | matching_simcol_complete_litemono.yml |
| Lite-mono | ✓ | | ✓ | RC_matching_simcol_complete_litemono.yml |
| Lite-mono | | ✓ | ✓ | RCC_cropalign_matching_simcol_complete_litemono.yml |
| MonoViT (baseline) | | | | baseline_simcol_complete_monovit.yml |
| MonoViT | ✓ | | | RC_baseline_simcol_complete_monovit.yml |
| MonoViT | | ✓ | | RCC_cropalign_simcol_complete_monovit.yml |
| MonoViT | | | ✓ | matching_simcol_complete_monovit.yml |
| MonoViT | ✓ | | ✓ | RC_matching_simcol_complete_monovit.yml |
| MonoViT | | ✓ | ✓ | RCC_cropalign_matching_simcol_complete_monovit.yml |

Run a chosen config with:

CUDA_VISIBLE_DEVICES=0 python run.py --config experiments/<file_name>.yml --cfg_params '{"model_name": "<exp_name>"}' --seed 1243 --gpu 0 --num_workers 4

The code will automatically run training.

  • Training outputs are recorded in the results folder.
  • Log files are saved in results/<exp_name>/logs and can be monitored with TensorBoard.
  • Checkpoints are saved in results/<exp_name>/models.
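For example, to queue up several of the C3VD runs from Table IV sequentially, the command above can be wrapped in a small Python loop. This is a convenience sketch, not a script shipped with the repo; the config paths assume the experiments/c3vd_v2/ layout described above, and using the config name as model_name is just one choice.

```python
import os
import subprocess

# Self-supervised C3VD configs taken from Table IV (paths relative to the repo root).
configs = [
    "experiments/c3vd_v2/baseline_c3vd_v2_monodepth2.yml",
    "experiments/c3vd_v2/RCC_matching_cropalign_c3vd_v2_monodepth2.yml",
]

env = {**os.environ, "CUDA_VISIBLE_DEVICES": "0"}
for cfg in configs:
    exp_name = os.path.splitext(os.path.basename(cfg))[0]   # e.g. baseline_c3vd_v2_monodepth2
    cmd = [
        "python", "run.py",
        "--config", cfg,
        "--cfg_params", f'{{"model_name": "{exp_name}"}}',
        "--seed", "1243", "--gpu", "0", "--num_workers", "4",
    ]
    subprocess.run(cmd, check=True, env=env)                 # run each experiment in turn
```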

Inference

Evaluation:

  • Please refer to playground/heqi/eval_c3vd.ipynb and playground/heqi/eval_simcol_complete_align_with_paper.ipynb (a sketch of the standard depth-evaluation metrics follows this list).

  • Existing checkpoints are available at Nutstore.

  • If you would like to further visualise these models, our visualisation code is also available for download at Nutstore.
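For reference, the metrics commonly reported for monocular depth estimation are abs rel, sq rel, RMSE, RMSE log, and the δ < 1.25 / 1.25² / 1.25³ accuracies, with per-image median scaling for scale-ambiguous self-supervised predictions. The sketch below is a generic implementation of these metrics and may differ in detail (depth ranges, scaling) from the evaluation notebooks above.

```python
import numpy as np

def depth_metrics(gt, pred, min_depth=1e-3, max_depth=None, median_scale=True):
    """Standard depth-evaluation metrics computed on valid ground-truth pixels."""
    valid = gt > min_depth
    if max_depth is not None:
        valid &= gt < max_depth
    gt, pred = gt[valid], pred[valid]

    if median_scale:                                   # align scale-ambiguous predictions to GT
        pred = pred * np.median(gt) / np.median(pred)
    pred = np.clip(pred, min_depth, max_depth)         # avoid zeros / extreme values before log

    thresh = np.maximum(gt / pred, pred / gt)
    a1 = (thresh < 1.25).mean()
    a2 = (thresh < 1.25 ** 2).mean()
    a3 = (thresh < 1.25 ** 3).mean()

    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean(((gt - pred) ** 2) / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))

    return dict(abs_rel=abs_rel, sq_rel=sq_rel, rmse=rmse,
                rmse_log=rmse_log, a1=a1, a2=a2, a3=a3)
```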

Contact

Citation

If you find our work helpful, please consider citing:

@ARTICLE{10587075,
  author={He, Qi and Feng, Guang and Bano, Sophia and Stoyanov, Danail and Zuo, Siyang},
  journal={IEEE Journal of Biomedical and Health Informatics}, 
  title={MonoLoT: Self-Supervised Monocular Depth Estimation in Low-Texture Scenes for Automatic Robotic Endoscopy}, 
  year={2024},
  pages={1-14},
  keywords={Estimation;Endoscopes;Training;Data models;Robots;Feature extraction;Image reconstruction;Monocular depth estimation;automatic intervention;digestive endoscopy},
  doi={10.1109/JBHI.2024.3423791}}

LICENSE

This work is licensed under CC BY-NC-SA 4.0

Acknowledgement

  • We thank all authors of C3VD for their excellent work.
  • We thank all authors of SimCol3D for their excellent work.
  • We thank all authors of EndoSLAM for their excellent work.
  • We thank all authors of EndoMapper for their excellent work.
