MedMask is a self-supervised masking framework designed to enhance medical image detection, particularly for breast cancer detection from mammograms (BCDM). It addresses the challenge of limited annotated datasets by leveraging masked autoencoders (MAE) and vision foundation models (VFMs) within a transformer-based architecture. The framework employs a customized MAE module that masks and reconstructs multi-scale feature maps, allowing the model to learn domain-specific characteristics more effectively. Additionally, MedMask integrates an expert contrastive knowledge distillation technique, utilizing the zero-shot capabilities of VFMs to improve feature representations. By combining self-supervised learning with knowledge distillation, MedMask achieves state-of-the-art performance on publicly available mammogram datasets like INBreast and DDSM, demonstrating significant improvements in sensitivity. Its applicability extends beyond medical imaging, showcasing generalizability to natural image tasks.
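The expert contrastive distillation can be pictured as an InfoNCE-style objective that pulls the detector's image-level features toward the frozen VFM embedding of the same image and pushes them away from the embeddings of other images in the batch. The sketch below is a minimal illustration under that assumption; the function and tensor names are ours, not the actual implementation, and it assumes both feature sets have already been projected to a shared dimension `D`.

```python
import torch
import torch.nn.functional as F

def expert_contrastive_loss(student_feats, vfm_embeds, temperature=0.07):
    """InfoNCE-style distillation sketch (names are illustrative assumptions).
    student_feats: (B, D) detector features; vfm_embeds: (B, D) frozen VFM
    (e.g., BiomedParse) embeddings for the same batch of images."""
    s = F.normalize(student_feats, dim=-1)
    t = F.normalize(vfm_embeds, dim=-1)
    logits = s @ t.t() / temperature                    # (B, B) similarity matrix
    targets = torch.arange(s.size(0), device=s.device)  # positives on the diagonal
    return F.cross_entropy(logits, targets)
```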
Our method involves two main stages:
We utilize BiomedParse, a powerful Vision Foundation Model (VFM) introduced in Nature Methods, to generate rich, task-agnostic embeddings from biomedical images, including mammograms. Trained across nine diverse modalities, BiomedParse performs joint segmentation, detection, and recognition using its unified architecture and pre-trained checkpoints. These robust embeddings serve as the foundational input for our MedMask framework, enabling superior generalization and domain understanding even with limited annotations.
BiomedParse Links: Paper | Repository
```bash
cd BiomedParse
conda env create -f environment.yml
conda activate biomedparse
pip install -r assets/requirements/requirements.txt
conda install pytorch torchvision torchaudio pytorch-cuda=12.4 -c pytorch -c nvidia
```

Set the input directory, output directory, and text prompt in `generate_embedings.py`:

```python
input_dir = '/home/Drive/Datasets/BCD_INBreast/coco_uniform_ts/train2017'
embeddings_dir = '/home/Drive/Outputs/embeddings/train_inbreast'
prompts = ['malignant cancer in the breast']
```

Then run the extraction script (model checkpoints are auto-loaded from HuggingFace):

```bash
python generate_embedings.py
```
After extracting embeddings, MedMask takes over. It leverages:
- A Masked Autoencoder (MAE) on multi-scale features (see the sketch after this list).
- Expert-Guided Contrastive Distillation from VFM.
- A Transformer-based backbone for detection.
- Trained on datasets like INBreast, DDSM, and RSNA-BSD1K.
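To make the MAE component concrete, here is a minimal PyTorch sketch of masking and reconstructing multi-scale feature maps; the class, parameter names, and the light transformer decoder are illustrative assumptions, not the repo's actual implementation.

```python
import torch
import torch.nn as nn

class MultiScaleMAE(nn.Module):
    """Sketch: randomly mask feature tokens at each scale, reconstruct them
    with a small decoder, and score reconstruction error on the masked
    positions only. All names here are assumptions for illustration."""
    def __init__(self, dim=256, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, feats):
        # feats: list of flattened feature maps, one (B, H*W, dim) tensor per scale
        loss = 0.0
        for tokens in feats:
            B, N, D = tokens.shape
            mask = torch.rand(B, N, device=tokens.device) < self.mask_ratio
            corrupted = torch.where(mask.unsqueeze(-1),
                                    self.mask_token.expand(B, N, D), tokens)
            recon = self.decoder(corrupted)
            loss = loss + ((recon - tokens) ** 2)[mask].mean()  # masked tokens only
        return loss / len(feats)
```

In the self-supervised stage this reconstruction loss would be optimized on unannotated images alongside (or before) the detection objective.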
We evaluate MedMask on the following mammogram datasets:
| Dataset | Samples | Malignant Cases | Format |
|---|---|---|---|
| DDSM | 1324 | 573 | COCO-JSON |
| INBreast | 200 | 41 | COCO-JSON |
| RSNA-BSD1K | 1000 | 200 | COCO-JSON |
RSNA-BSD1K is a bounding-box-annotated subset of 1,000 mammograms curated from the RSNA Breast Screening Dataset to support further research in breast cancer detection from mammograms (BCDM). The original RSNA dataset consists of 54,706 screening mammograms from 8,000 patients, containing 1,000 malignancies. From this, we curated 1,000 mammograms, including 200 malignant cases, each annotated at the bounding-box level by two expert radiologists.
```
[data_root]
├── inbreast/
│   ├── annotations/
│   └── images/train/, val/, test/
├── ddsm/
│   ├── annotations/
│   └── images/train/, val/, test/
└── rsna-bsd1k/
    ├── annotations/
    └── images/train/, val/, test/
```
- Linux, CUDA >= 11.1, GCC >= 8.4
- Python >= 3.8
- torch >= 1.10.1, torchvision >= 0.11.2
- Other requirements:

```bash
pip install -r requirements.txt
```
Compile the CUDA operators for the deformable attention modules:

```bash
cd ./models/ops
sh ./make.sh
# unit test (should see all checking is True)
python test.py
```

We provide the three benchmark datasets used in our experiments:
- BCD_DDSM: The DDSM dataset consists of 1,324 mammography images, including 573 malignant cases.
- BCD_InBreast: The INBreast dataset contains 200 images from 115 patients, with 41 malignant and 159 benign cases.
- BCD_RSNA: The RSNA-BSD1K dataset is a curated subset of 1,000 mammograms from the original RSNA dataset, including 200 malignant cases with bounding box annotations.
You can download the raw data from the official websites and organize the datasets and annotations as follows:
```
[data_root]
├── inbreast/
│   ├── annotations/
│   │   ├── instances_train.json
│   │   └── instances_val.json
│   └── images/
│       ├── train/
│       └── val/
├── ddsm/
│   ├── annotations/
│   │   ├── instances_train.json
│   │   └── instances_val.json
│   └── images/
│       ├── train/
│       └── val/
└── rsna-bsd1k/
    ├── annotations/
    │   ├── instances_full.json
    │   └── instances_val.json
    └── images/
        ├── train/
        └── val/
```

To use additional datasets, edit `datasets/coco_style_dataset.py` and add key-value pairs to `CocoStyleDataset.img_dirs` and `CocoStyleDataset.anno_files`, as in the sketch below.
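A hypothetical example of such entries; the nested-dict shape, key names, and paths are assumptions based on the `[data_root]` layout above, not the file's verbatim contents:

```python
from datasets.coco_style_dataset import CocoStyleDataset

# Hypothetical registration of a new dataset ("mydataset"); adjust the
# split names and paths to match your data under [data_root].
CocoStyleDataset.img_dirs['mydataset'] = {
    'train': 'mydataset/images/train',
    'val': 'mydataset/images/val',
}
CocoStyleDataset.anno_files['mydataset'] = {
    'train': 'mydataset/annotations/instances_train.json',
    'val': 'mydataset/annotations/instances_val.json',
}
```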
Our method follows a three-stage training paradigm to optimize efficiency and performance. The initial stage involves standard supervised training using annotated mammograms. The second stage incorporates self-supervised masked autoencoding (MAE) to refine representations from unannotated data. The final stage introduces Expert-Guided Fine-Tuning, leveraging vision foundation models (VFMs) for enhanced feature learning and generalization.
For training on the DDSM-to-INBreast benchmark, update the files in `configs/def-detr-base/ddsm/` to specify `DATA_ROOT`, `OUTPUT_DIR`, and `embeddings_dir`, then execute:
```bash
sh configs/def-detr-base/ddsm/source_only.sh   # Stage 1: Baseline Object Detector Training
sh configs/def-detr-base/ddsm/cross_domain.sh  # Stage 2: Masked Autoencoder Training
```

MedMask achieves state-of-the-art results across mammogram datasets, with significant gains in sensitivity and cross-domain generalization, particularly in the low-data regime.
We conduct all experiments with batch size 8 (8 labeled samples for the source_only stage; 8 labeled and 8 unlabeled samples for the cross_domain_mae and MRT teaching stages) on 4 NVIDIA A100 GPUs.
| Dataset | Encoder Layer | Decoder Layer | R@0.3 | Weights |
|---|---|---|---|---|
| RSNA-BSD1K | 6 | 6 | 0.886 | Download |
| DDSM | 6 | 6 | 0.718 | Download |
| INBreast | 6 | 6 | 0.888 | Download |
- BiomedParse team (Microsoft Research)
- DDSM, INBreast, and RSNA dataset contributors
- Open-source VFM and MAE research communities
