CrossKEY
A 3D Cross-modal Keypoint Descriptor for MR-US Matching and Registration

Daniil Morozov1,2 · Reuben Dorent3,4 · Nazim Haouchine2

1 Technical University of Munich · 2 Harvard Medical School, Brigham and Women's Hospital · 3 Inria · 4 Sorbonne Université, Paris Brain Institute

Links: arXiv · Open in Colab · Dataset · License

CrossKEY Demo

Abstract

Intraoperative registration of real-time ultrasound (iUS) to preoperative Magnetic Resonance Imaging (MRI) remains an unsolved problem due to severe modality-specific differences in appearance, resolution, and field-of-view. To address this, we propose a novel 3D cross-modal keypoint descriptor for MRI-iUS matching and registration. Our method employs a patient-specific matching-by-synthesis strategy, generating synthetic iUS volumes from preoperative MRI. This enables supervised contrastive training to learn a shared descriptor space. A probabilistic keypoint detection strategy is then employed to identify anatomically salient and modality-consistent locations. During training, a curriculum-based triplet loss with dynamic hard negative mining is used to learn descriptors that are i) robust to iUS artifacts such as speckle noise and limited coverage, and ii) rotation-invariant. At inference, the method detects keypoints in MR and real iUS images and identifies sparse matches, which are then used to perform rigid registration. We evaluate our approach on 3D MRI-iUS pairs from the ReMIND dataset. Experiments show that it outperforms state-of-the-art keypoint matching methods across 11 patients, with an average precision of 69.8%. For image registration, our method achieves a competitive mean Target Registration Error of 2.39 mm on the ReMIND2Reg benchmark.
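The final rigid-registration step described above can be illustrated with the classic closed-form Kabsch/Procrustes fit: given sparse matched keypoint coordinates, recover the rotation and translation in the least-squares sense. This is a generic NumPy sketch, not the repository's implementation:

```python
import numpy as np

def rigid_from_matches(src, dst):
    """Least-squares rigid transform (R, t) with dst_i ~ R @ src_i + t.

    src, dst: (N, 3) arrays of matched keypoint coordinates.
    Classic Kabsch/Procrustes solution via SVD.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])  # guard against a reflection solution
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Sanity check: recover a known rotation + translation from clean matches
rng = np.random.default_rng(0)
pts = rng.standard_normal((20, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
R, t = rigid_from_matches(pts, pts @ R_true.T + t_true)
```

In practice the matches contain outliers, so such a fit is typically wrapped in a robust estimator (e.g. RANSAC) rather than applied to all matches at once.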

Method Overview

Getting Started

Prerequisites

  • Python >= 3.12
  • uv for dependency management
  • Linux or macOS (for SIFT3D compilation; macOS requires Homebrew)

Installation

# Clone (fast, skips large data files):
GIT_LFS_SKIP_SMUDGE=1 git clone --depth 1 https://github.com/morozovdd/CrossKEY.git
cd CrossKEY
git lfs pull

# Install dependencies and build SIFT3D:
./setup.sh
source .venv/bin/activate

Training

python example_train.py [--config configs/train_config.yaml] [--data-dir data]

On first run, SIFT descriptors and keypoint heatmaps are generated automatically. Checkpoints are saved to logs/.
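The curriculum triplet loss with dynamic hard-negative mining mentioned in the abstract can be illustrated by a minimal batch-hard variant. This NumPy sketch is illustrative only (the function name, margin, and mining rule are assumptions, not the repository's code):

```python
import numpy as np

def batch_hard_triplet_loss(anchors, positives, margin=0.2):
    """Batch-hard triplet loss on paired descriptors.

    anchors, positives: (N, D) arrays; row i of each is a matching pair.
    For each anchor, the hardest negative is the closest descriptor
    among positives belonging to a *different* keypoint.
    """
    # Pairwise squared Euclidean distances anchor_i -> positive_j
    d2 = ((anchors[:, None, :] - positives[None, :, :]) ** 2).sum(-1)
    pos = np.diag(d2)                    # matching-pair distances
    neg = d2 + np.eye(len(d2)) * 1e9     # mask out the true positives
    hard_neg = neg.min(axis=1)           # hardest negative per anchor
    return np.maximum(pos - hard_neg + margin, 0.0).mean()

# Toy check: well-separated, identical pairs give zero loss
a = np.eye(4)
loss = batch_hard_triplet_loss(a, a, margin=0.2)
```

A curriculum version would start with random negatives and gradually switch to the hardest ones as training progresses.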

Testing

python example_test.py --checkpoint path/to/checkpoint.ckpt [--data-dir data]
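At test time, descriptor matching (see src/model/matcher.py) boils down to nearest-neighbor search between the MR and iUS descriptor sets, usually with a mutual-consistency check to suppress one-sided matches. A generic sketch of that idea, not the repository's exact logic:

```python
import numpy as np

def mutual_nn_matches(desc_mr, desc_us):
    """Return index pairs (i, j) such that desc_mr[i] and desc_us[j]
    are each other's nearest neighbor under Euclidean distance."""
    d2 = ((desc_mr[:, None, :] - desc_us[None, :, :]) ** 2).sum(-1)
    nn_us = d2.argmin(axis=1)   # best US match for each MR descriptor
    nn_mr = d2.argmin(axis=0)   # best MR match for each US descriptor
    return [(i, j) for i, j in enumerate(nn_us) if nn_mr[j] == i]

# Tiny 2D example: the third MR descriptor has no mutual partner
mr = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
us = np.array([[0.0, 0.9], [0.9, 0.1]])
matches = mutual_nn_matches(mr, us)
```

The brute-force distance matrix is fine for the sparse keypoint counts involved here; a KD-tree or FAISS index would be the usual replacement at larger scale.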

Data

Included Example

The repository includes one case (Case059) for testing:

data/img/
├── mr/              # T2-weighted brain MRI (.nii.gz)
├── us/              # Real intraoperative ultrasound (.nii.gz)
└── synthetic_us/    # Synthetic US generated from MR (.nii.gz)

Using Your Own Data

Place your NIfTI files (.nii.gz) following the directory structure shown above. Requirements:

  • MR: 3D brain MRI (T1/T2 weighted)
  • Synthetic US: Generated from MR using an ultrasound synthesis pipeline (required for training)
  • Real US: 3D intraoperative ultrasound volume (required for testing only)

SIFT descriptors and heatmaps are generated automatically on first training run.
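A quick stdlib check that your data follows the expected layout before launching a run (a sketch based on the structure above; the actual loaders may impose further constraints such as matching filenames across folders):

```python
from pathlib import Path
import tempfile

def check_layout(root="data/img", need_real_us=False):
    """Verify mr/ and synthetic_us/ (plus us/ if testing on real iUS)
    exist under `root` and each contain at least one .nii.gz volume."""
    required = ["mr", "synthetic_us"] + (["us"] if need_real_us else [])
    problems = []
    for sub in required:
        vols = sorted(Path(root, sub).glob("*.nii.gz"))
        if not vols:
            problems.append(f"{root}/{sub}: no .nii.gz files found")
    return problems

# Demo against a scratch directory: mr/ is populated, synthetic_us/ is empty
tmp = Path(tempfile.mkdtemp())
(tmp / "mr").mkdir()
(tmp / "mr" / "case.nii.gz").touch()
(tmp / "synthetic_us").mkdir()
problems = check_layout(tmp)
```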

Configuration

Training and evaluation are configured via YAML files in configs/:

  • train_config.yaml -- model architecture, loss, optimizer, data augmentation, training schedule
  • test_config.yaml -- checkpoint path, evaluation thresholds

All parameters can also be overridden via command-line arguments. Run python example_train.py --help for details.
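The resulting precedence (YAML defaults, overridden by any CLI flag that is actually given) amounts to a shallow dict merge. A stdlib illustration of that behavior; the keys here are made up and do not reflect CrossKEY's actual schema or parser:

```python
import argparse

def merged_config(yaml_defaults, argv):
    """CLI flags, when supplied, override YAML defaults; keys the user
    does not touch keep their YAML values. Keys are illustrative only."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--lr", type=float)
    parser.add_argument("--batch-size", type=int, dest="batch_size")
    # Drop flags the user did not pass so they don't clobber defaults
    cli = {k: v for k, v in vars(parser.parse_args(argv)).items()
           if v is not None}
    return {**yaml_defaults, **cli}

# --lr overrides the YAML value; batch_size falls through from YAML
cfg = merged_config({"lr": 1e-3, "batch_size": 8}, ["--lr", "0.01"])
```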

Project Structure

CrossKEY/
├── src/
│   ├── model/
│   │   ├── descriptor.py     # Lightning module for descriptor learning
│   │   ├── networks.py       # 3D ResNet encoder
│   │   ├── losses.py         # Triplet, InfoNCE, BCE losses
│   │   └── matcher.py        # KNN matching and evaluation
│   ├── data/
│   │   ├── datamodule.py     # Lightning DataModule
│   │   ├── dataset.py        # Training and inference datasets
│   │   └── transforms.py     # 3D rotation, crop, normalization
│   └── utils/
│       ├── sift.py           # SIFT3D wrapper
│       └── utils.py          # NIfTI I/O utilities
├── scripts/
│   ├── run_sift.py           # SIFT3D keypoint extraction
│   └── create_heatmaps.py    # Probabilistic keypoint heatmaps
├── configs/                   # YAML configuration files
├── data/img/                  # Example data (Case059)
├── example_train.py           # Training entry point
└── example_test.py            # Evaluation entry point

Citation

@ARTICLE{11474556,
  author={Morozov, Daniil and Dorent, Reuben and Haouchine, Nazim},
  journal={IEEE Transactions on Medical Imaging},
  title={A 3D Cross-modal Keypoint Descriptor for MR-US Matching and Registration},
  year={2026},
  volume={},
  number={},
  pages={1-1},
  keywords={Feeds;Speckle;Filtering;Filters;Optical noise;Circuits and systems;Communication systems;Digital images;Protocols;Spatial diversity;Cross-modality;3D Keypoint Descriptor;MRI;Ultrasound;Matching and Registration},
  doi={10.1109/TMI.2026.3680352}}

License

This project is licensed under the MIT License. See LICENSE for details.
