
CABLD: Contrast-Agnostic Brain Landmark Detection with Consistency-Based Regularization

Health-X Lab | IMPACT Lab

Soorena Salari, Arash Harirpoush, Hassan Rivaz, Yiming Xiao


🧠 Overview

main figure

Abstract: Anatomical landmark detection in medical images is essential for various clinical and research applications, including disease diagnosis and surgical planning. However, manual landmark annotation is time-consuming and requires significant expertise. Existing deep learning (DL) methods often require large amounts of well-annotated data, which are costly to acquire. In this paper, we introduce CABLD, a novel self-supervised DL framework for 3D brain landmark detection in unlabeled scans with varying contrasts by using only a single reference example. To achieve this, we employ an inter-subject landmark consistency loss together with an image registration loss, while introducing a 3D convolution-based contrast augmentation strategy to promote model generalization to new contrasts. Additionally, we utilize an adaptive mixed loss function to schedule the contributions of different sub-tasks for optimal outcomes. We demonstrate the proposed method on the intricate task of MRI-based 3D brain landmark detection. With comprehensive experiments on four diverse clinical and public datasets, including both T1w and T2w MRI scans at different MRI field strengths, we show that CABLD outperforms state-of-the-art methods in terms of mean radial errors (MREs) and success detection rates (SDRs). Our framework provides a robust and accurate solution for anatomical landmark detection, reducing the need for extensively annotated datasets and generalizing well across different imaging contrasts.

Method

main figure

✨ Key Features

  1. One-Shot Contrast-Agnostic Landmark Detection: CABLD detects 3D brain landmarks from unlabeled scans using only a single annotated template, eliminating the need for large labeled datasets.
  2. Consistency-Regularized Multi-Task Learning: Introduces dual inter-subject and subject-template consistency losses alongside a deformable registration loss to enforce anatomically consistent landmark detection.
  3. 3D Random Convolution for Contrast Augmentation: Pioneers the use of 3D random convolution layers for contrast augmentation, enabling robust performance across unseen MRI contrasts without requiring multi-contrast training data.
  4. Clinically Validated and Robust Performance: Achieves state-of-the-art accuracy on multiple datasets and shows strong generalization to T2w scans, anatomical misalignments, and downstream disease diagnosis (PD/AD) via landmark-based features.
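The multi-task objective described above can be sketched as a weighted sum of the two consistency terms and a registration similarity term. This is a minimal illustration, not the repository's actual API: the function name, tensor shapes, and fixed `weights` are assumptions, and the adaptive scheduling of the weights over training is omitted.

```python
import torch

def mixed_loss(pred_a, pred_b, warped_template_lm, reg_sim, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of (1) inter-subject landmark consistency,
    (2) subject-template consistency, and (3) a registration similarity term.

    pred_a, pred_b:      predicted landmark coordinates for two subjects, (N, 3)
    warped_template_lm:  template landmarks warped into subject A's space, (N, 3)
    reg_sim:             scalar image-similarity loss from the registration branch
    """
    w_c, w_t, w_r = weights
    # Mean Euclidean distance between the two subjects' corresponding landmarks.
    inter_subject = (pred_a - pred_b).norm(dim=-1).mean()
    # Agreement between subject A's predictions and the warped template landmarks.
    subject_template = (pred_a - warped_template_lm).norm(dim=-1).mean()
    return w_c * inter_subject + w_t * subject_template + w_r * reg_sim
```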

☑️ 3D RC for Contrast Augmentation

To improve generalization across different and unseen MRI contrasts, we use 3D random convolutions for contrast augmentation.
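The idea can be sketched as follows: convolving a volume with a freshly sampled random kernel alters its intensity profile (contrast) while largely preserving spatial structure. The kernel size, weight normalization, and intensity rescaling below are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def random_conv_3d(vol, kernel_size=3):
    """Apply a freshly sampled random 3D convolution to a single-channel
    volume of shape (B, 1, D, H, W), producing a new synthetic contrast."""
    weight = torch.randn(1, 1, kernel_size, kernel_size, kernel_size)
    weight = weight / (weight.abs().sum() + 1e-8)   # keep output magnitude bounded
    out = F.conv3d(vol, weight, padding=kernel_size // 2)
    # Rescale to [0, 1] so downstream losses see a consistent intensity range.
    out = (out - out.amin()) / (out.amax() - out.amin() + 1e-8)
    return out
```

Because a new kernel is drawn each call, every training iteration can expose the model to a different synthetic contrast without any multi-contrast data.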

Results

Results reported below show accuracy for three T1w MRI datasets (HCP, OASIS, and SNSX) and one T2w MRI dataset (HCP), which features an unseen contrast.

Mean Radial Error (MRE) Comparison Across Datasets (mm)

| Method | HCP T1w | OASIS T1w | SNSX T1w | HCP T2w |
|---|---|---|---|---|
| 3D SIFT | 39.44 ± 31.02 | 39.08 ± 29.70 | 41.67 ± 31.84 | 54.90 ± 24.51 |
| NiftyReg | 4.43 ± 2.42 | 8.23 ± 3.29 | 9.61 ± 4.03 | 4.40 ± 2.41 |
| ANTs (CC) | 3.85 ± 2.26 | 4.38 ± 2.64 | 6.36 ± 3.28 | – |
| ANTs (MI) | 3.65 ± 2.29 | 4.15 ± 2.65 | 6.06 ± 3.22 | 3.91 ± 2.19 |
| KeyMorph (64 KPs) | 8.05 ± 4.51 | 8.20 ± 4.64 | 9.73 ± 5.35 | 6.00 ± 2.64 |
| KeyMorph (128 KPs) | 5.77 ± 2.91 | 6.41 ± 3.41 | 8.99 ± 4.16 | 8.66 ± 4.29 |
| KeyMorph (256 KPs) | 5.37 ± 3.12 | 6.44 ± 3.81 | 8.80 ± 5.22 | 6.41 ± 3.06 |
| KeyMorph (512 KPs) | 4.67 ± 2.47 | 7.15 ± 3.63 | 5.77 ± 3.27 | 5.54 ± 3.31 |
| BrainMorph | 4.11 ± 2.30 | 5.28 ± 3.07 | 13.66 ± 18.21 | 4.24 ± 2.43 |
| uniGradICON | 4.12 ± 2.53 | 4.63 ± 3.00 | 5.27 ± 3.53 | 13.44 ± 3.88 |
| MultiGradICON | 4.10 ± 2.56 | 4.62 ± 3.01 | 5.21 ± 3.40 | 4.31 ± 2.70 |
| Fully Sup. 3D CNN | 4.65 ± 2.40 | 4.53 ± 2.81 | 6.64 ± 3.86 | – |
| CABLD | 3.27 ± 2.24 | 3.89 ± 2.69 | 5.11 ± 3.19 | 3.99 ± 2.25 |
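The two reported metrics can be computed directly from predicted and ground-truth landmark coordinates (in mm). The sketch below is illustrative: the function name and the SDR threshold are assumptions, and the repository's own evaluation code may differ in such details.

```python
import numpy as np

def mre_and_sdr(pred, gt, threshold_mm=4.0):
    """Mean radial error (mm) and success detection rate for landmark
    arrays of shape (N, 3). SDR is the fraction of landmarks whose
    radial (Euclidean) error falls below `threshold_mm`."""
    radial = np.linalg.norm(pred - gt, axis=-1)   # per-landmark Euclidean error
    return radial.mean(), (radial < threshold_mm).mean()
```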

🛠 Requirements

  • Python >= 3.8
  • PyTorch >= 2.0
  • CUDA >= 11.8 (optional but recommended)
  • Dependencies: SimpleITK, MONAI, NumPy, SciPy, Pandas

Install via pip:

```shell
pip install torch torchvision SimpleITK monai numpy scipy pandas
```
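A quick way to verify the installation is to import each dependency and print its version; this snippet simply mirrors the pip command above and is not part of the repository.

```python
import importlib

# Module names as imported in Python (note: the pip package "SimpleITK"
# is also imported as "SimpleITK").
for pkg in ["torch", "SimpleITK", "monai", "numpy", "scipy", "pandas"]:
    try:
        mod = importlib.import_module(pkg)
        print(f"{pkg}: {getattr(mod, '__version__', 'unknown')}")
    except ImportError:
        print(f"{pkg}: NOT INSTALLED")
```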

Citation

If you find this repository useful, please consider giving it a star ⭐ and citing our work:

```bibtex
@inproceedings{salari2025cabldcontrastagnosticbrainlandmark,
        title={CABLD: Contrast-Agnostic Brain Landmark Detection with Consistency-Based Regularization},
        author={Salari, Soorena and Harirpoush, Arash and Rivaz, Hassan and Xiao, Yiming},
        booktitle={IEEE/CVF International Conference on Computer Vision (ICCV)},
        year={2025}
}
```

✉️ Contact

For any questions, feel free to contact the corresponding author: soorena.salari@concordia.ca.

Acknowledgements

Our code builds upon the KeyMorph, BrainMorph, and AFIDs repositories. We are grateful to the authors for making their code publicly available. If you use our model or code, we kindly request that you also consider citing these foundational works.

About

[ICCV 2025] Official implementation of CABLD
