Official PyTorch implementation of our paper "Few-Shot Adaptation of Training-Free Foundation Model for 3D Medical Image Segmentation"


I3Tlab/FATE-SAM

Few-Shot Adaptation of Training-Free Foundation Model for 3D Medical Image Segmentation

Introduction

Few-shot Adaptation of Training-frEe SAM (FATE-SAM) is a versatile framework for 3D medical image segmentation that adapts the pretrained SAM2 model without fine-tuning. By leveraging a support example-based memory mechanism and volumetric consistency, FATE-SAM enables prompt-free, training-free segmentation across diverse medical datasets. This approach achieves robust performance while eliminating the need for large annotated datasets. For more details and results, please refer to our paper.
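For intuition only, the toy sketch below propagates labels from one annotated support slice to a test slice by nearest-neighbour matching of small intensity patches. It is a deliberately simple stand-in to illustrate the general idea of prompt-free, support-driven segmentation; it is not FATE-SAM's memory mechanism, and every name and array in it is hypothetical.

import numpy as np

def toy_support_propagation(test_slice, support_slice, support_label, patch=3):
    # Toy stand-in for support-driven segmentation (NOT the FATE-SAM method):
    # each test pixel copies the label of the most similar support patch.
    h, w = test_slice.shape
    pad = patch // 2
    t = np.pad(test_slice, pad, mode="edge")
    s = np.pad(support_slice, pad, mode="edge")
    sup_patches = np.array([s[i:i + patch, j:j + patch].ravel()
                            for i in range(h) for j in range(w)])
    sup_labels = support_label.ravel()
    pred = np.zeros_like(support_label)
    for i in range(h):
        for j in range(w):
            q = t[i:i + patch, j:j + patch].ravel()
            pred[i, j] = sup_labels[np.argmin(((sup_patches - q) ** 2).sum(axis=1))]
    return pred

# Synthetic example: a bright square plays the role of an organ in both slices.
support = np.zeros((32, 32)); support[8:20, 8:20] = 1.0
label = (support > 0).astype(np.uint8)
test = np.zeros((32, 32)); test[10:24, 10:24] = 1.0
print(toy_support_propagation(test, support, label).sum())  # predicted foreground pixels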

[Figure 1: figure1.svg]

Getting Started

The hardware environment we tested on:

  • OS: Ubuntu 20.04.6 LTS
  • CPU: Intel Core i9-10980XE CPU @ 3.00GHz * 36
  • GPU: NVIDIA RTX A5000

Installation

  1. Download and install the appropriate version of NVIDIA driver and CUDA for your GPU.
  2. Download and install Anaconda or Miniconda.
  3. Clone this repo and cd to the project path.
git clone git@github.com:I3Tlab/FATE-SAM.git
cd FATE-SAM
  4. Create and activate the Conda environment:
conda create --name FATE_SAM python=3.10.12
conda activate FATE_SAM
  5. Install dependencies:
pip install -r requirements.txt
  6. Download checkpoints

    You can either download sam2.1_hiera_large.pt directly using this link and place it in the checkpoints/ directory, or follow the official SAM2 repository for more options; a minimal download sketch is shown after this list.
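If you prefer to script this step, below is a minimal download sketch. The checkpoint URL is an assumption based on the public SAM2 release and may change; verify it against the official SAM2 repository before relying on it.

import urllib.request
from pathlib import Path

# Assumed checkpoint URL from the public SAM2 release; verify before use.
url = "https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_large.pt"
dest = Path("checkpoints") / "sam2.1_hiera_large.pt"
dest.parent.mkdir(parents=True, exist_ok=True)
print(f"Downloading {url} -> {dest}")
urllib.request.urlretrieve(url, str(dest))  # large file; this may take a while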

Quick Inference (Single Volume)

Run inference with the GUI:

streamlit run notebooks/app.py
▶️ Watch the GUI demo video on YouTube.

Batch Inference

Run inference from the command line:

python notebooks/fate_sam_predict.py \
       --test_image_path <path-to-test-image-nii.gz-folder> \
       --support_images_path <path-to-support-images-nii.gz-dir> \
       --support_labels_path <path-to-support-labels-nii.gz-dir>
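As a concrete but hypothetical example, the call below mirrors the command above using the sample directory layout described under Support Data; here both support flags point at the same Support/ folder, assuming images and labels are distinguished only by the _img/_label suffix. Adjust the paths to your own dataset.

import subprocess

dataset = "data/MyDataset"  # hypothetical dataset root with Test/ and Support/ subfolders
subprocess.run(
    [
        "python", "notebooks/fate_sam_predict.py",
        "--test_image_path", f"{dataset}/Test",
        "--support_images_path", f"{dataset}/Support",
        "--support_labels_path", f"{dataset}/Support",
    ],
    check=True,
)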

Support Data

To perform inference, support images and labels are needed. We provide a few support examples in notebooks/data, covering different anatomies, adapted from the following publicly available datasets.

| Dataset | Anatomy | Segmentation Objects | Modality |
|---------|---------|----------------------|----------|
| SKI10 | Knee | (1) femur (2) femoral cartilage (3) tibia (4) tibial cartilage | MRI |
| BTCV | Abdomen | (1) spleen (2) right kidney (3) left kidney (4) gallbladder (5) esophagus (6) liver (7) stomach (8) aorta (9) inferior vena cava (10) portal vein and splenic vein (11) pancreas (12) right adrenal gland (13) left adrenal gland | CT |
| ACDC | Heart | (1) left ventricle (2) right ventricle (3) myocardium | Cine-MRI |
| MSD-Hippocampus | Brain | (1) anterior (2) posterior | MRI |
| MSD-Prostate | Prostate | (1) peripheral zone (2) transition zone | MRI |

To use your own support images and labels, please save the support images in .nii format. We recommend using 3D Slicer to generate the support labels and saving them with the following file format and folder structure.

Please follow Data Format for more guidance.

Sample dataset directory structure:
<dataset>/
├── Test/
│   └── 0001_img.nii.gz
└── Support/
    ├── 0002_img.nii.gz
    ├── 0002_label.nii.gz
    ├── 0003_img.nii.gz
    ├── 0003_label.nii.gz
    └── ...

Note: Always use the suffix _img for support/test images and _label for their corresponding labels. This naming convention ensures the files are correctly matched during processing.
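As an illustration of this convention, the standalone sketch below (not part of the FATE-SAM code) pairs each *_img.nii.gz file in a Support/ folder with its *_label.nii.gz counterpart and reports anything left unmatched; the folder path is hypothetical.

from pathlib import Path

def pair_support_files(support_dir):
    # Pair each *_img.nii.gz with the *_label.nii.gz of the same case ID.
    support_dir = Path(support_dir)
    pairs, missing = [], []
    for img in sorted(support_dir.glob("*_img.nii.gz")):
        label = img.with_name(img.name.replace("_img.nii.gz", "_label.nii.gz"))
        (pairs if label.exists() else missing).append((img, label))
    return pairs, missing

pairs, missing = pair_support_files("data/MyDataset/Support")  # hypothetical path
print(f"{len(pairs)} matched image/label pairs, {len(missing)} images without labels")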

Output

For GUI inference, segmentation results can be saved to a specified path in .nii format. For command-line inference, segmentation results are written to the predictions/ folder in .nii format for further use.
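To inspect a result programmatically, a predicted volume can be loaded with nibabel (install it with pip install nibabel if it is not already in your environment); the filename below is hypothetical.

import numpy as np
import nibabel as nib

pred = nib.load("predictions/0001_pred.nii.gz")     # hypothetical output file
seg = np.asarray(pred.get_fdata(), dtype=np.int16)  # label volume as integers
print("volume shape:", seg.shape)
print("labels present:", np.unique(seg))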

Publication

@article{he2025few,
  title={Few-Shot Adaptation of Training-Free Foundation Model for 3D Medical Image Segmentation},
  author={He, Xingxin and Hu, Yifan and Zhou, Zhaoye and Jarraya, Mohamed and Liu, Fang},
  journal={arXiv preprint arXiv:2501.09138},
  year={2025}
}

Contacts

Intelligent Imaging Innovation and Translation Lab [github] at the Athinoula A. Martinos Center of Massachusetts General Hospital and Harvard Medical School

149 13th Street, Suite 2301 Charlestown, Massachusetts 02129, USA

License

This repository is released under the Apache-2.0 license (see LICENSE). Code covered by LICENSE_cctorch is distributed under the BSD-3-Clause license.
