
Style Transfer Harmonization


Multi-site MRI data often exhibit variations in image appearance due to differences in imaging protocols, scanner hardware, and acquisition parameters. This repository accompanies a paper presenting an approach that leverages Generative Adversarial Networks (GANs) for MRI harmonization. By adapting the style of MRI images from one site to match that of another, we aim to reduce inter-site variability and improve the generalizability of MRI-based models.


Getting Started

To begin, clone this repository. It contains both the source code and the pre-trained model described in the paper.

The pre-trained model is stored in the /expr_256/checkpoints/ directory. However, due to GitHub's file size limits, only a partial version of the model is included here. To obtain the complete set of model files, please use the following external link: Complete Model Files.

After downloading, replace the existing files in the /expr_256/checkpoints/ directory with the downloaded files.

Prerequisites

Clone this repository:

git clone https://github.com/USCLoBeS/style_transfer_harmonization.git
cd style_transfer_harmonization/

The model is trained on skull-stripped images registered to the MNI152 template and resized to 256 x 256 x 256. To use the model, process your data to meet these requirements.

The pre_process.sh script performs the registration and resizing; however, it requires skull-stripped images as input.

Note: The script uses FSL's flirt to process the data, so please make sure FSL is installed.

To register and resize images, pass the directory containing the input images and the output directory for the processed images as arguments.

An illustrative example is stored in nii_MNI/:

sh pre_process.sh ./nii_MNI/raw ./nii_MNI/resampled
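The preprocessing target is a 256 x 256 x 256 volume. As a rough illustration of the resizing step only (the actual script uses flirt for registration and resampling; this nearest-neighbour NumPy sketch is not the script's method, just a way to see the shape transformation):

```python
import numpy as np

def resize_nearest(vol, shape=(256, 256, 256)):
    """Nearest-neighbour resample of a 3D volume to the target shape."""
    idx = [np.round(np.linspace(0, s - 1, t)).astype(int)
           for s, t in zip(vol.shape, shape)]
    return vol[np.ix_(idx[0], idx[1], idx[2])]

vol = np.random.rand(182, 218, 182)   # typical MNI152 1mm grid dimensions
out = resize_nearest(vol)
print(out.shape)                      # (256, 256, 256)
```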

Environment Setup

To use the model, first install the required dependencies. Use the provided ENV.yml file to create the conda environment, then activate it once it's created:

conda env create -n style-harmonization --file ENV.yml
conda activate style-harmonization

Usage

You can use the pretrained model or train the model on your own data. Please refer to the section that matches your use case.

Pretrained Model

To use the pretrained model, run the harmonize_images.sh script with the following paths:

  • A single reference image
  • The input images directory
  • The output directory
  • The model checkpoint directory

Note: The input and reference images must be pre-processed before using the model.

sh harmonize_images.sh demo/ref/XYZ_T1.nii.gz demo/input_nii/ demo/output/ expr_256/

Training Networks

In order to train the model, first convert the images into 2D slices. Create a training and a validation directory, then use the following commands. Both scripts take two flags:

  • load_path: the path to the input images
  • save_path: the path to store the output slices
# Prepare Training slices
python ./processing/convert_nii_to_slices_train.py --load_path demo/train_nii --save_path demo/train_slices

# Prepare Validation slices
python ./processing/convert_nii_to_slices_train.py --load_path demo/val_nii --save_path demo/val_slices
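The conversion scripts split each 3D volume into 2D slices for training. Conceptually, the operation looks like the following NumPy sketch (the actual script's slice axis, file naming, and output format may differ):

```python
import numpy as np

def volume_to_slices(vol, axis=2):
    """Split a 3D volume into a list of 2D slices along `axis`."""
    return [np.take(vol, i, axis=axis) for i in range(vol.shape[axis])]

vol = np.zeros((256, 256, 256))       # a pre-processed volume
slices = volume_to_slices(vol)
print(len(slices), slices[0].shape)   # 256 (256, 256)
```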

To train the model from scratch, run the following commands. Generated images and network checkpoints will be stored in the expr/samples and expr/checkpoints directories, respectively.

export CUDA_VISIBLE_DEVICES=3

python main.py --mode train --lambda_reg 1 --lambda_sty 1 \
            --lambda_ds 1 --lambda_cyc 100 --ds_iter 200000 --total_iters 200000 \
            --eval_every 200000 --train_img_dir demo/train_slices \
            --val_img_dir demo/val_slices --sample_every 5000 \
            --sample_dir expr_customer/samples --checkpoint_dir expr_customer/checkpoints \
            --batch_size 4
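The lambda_* flags above weight the individual loss terms. As a schematic only, here is how they might combine in a StarGAN v2-style generator objective; the actual formulation lives in main.py and may differ (in particular, lambda_reg typically weights the discriminator's R1 penalty rather than a generator term):

```python
# Schematic weighting of the generator loss terms implied by the flags
# above; values mirror the example command (--lambda_sty 1, --lambda_ds 1,
# --lambda_cyc 100). This is an illustration, not the repository's code.
lambda_sty, lambda_ds, lambda_cyc = 1.0, 1.0, 100.0

def generator_loss(adv, sty, ds, cyc):
    # The diversity-sensitive (ds) term is maximized, hence subtracted;
    # cycle-consistency (cyc) dominates because lambda_cyc = 100.
    return adv + lambda_sty * sty - lambda_ds * ds + lambda_cyc * cyc

print(generator_loss(adv=0.5, sty=0.1, ds=0.2, cyc=0.01))  # 1.4
```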

If you use this code, please cite the following papers:

  1. Liu M, Maiti P, Thomopoulos S, Zhu A, Chai Y, Kim H, Jahanshad N. Style Transfer Using Generative Adversarial Networks for Multi-Site MRI Harmonization. Med Image Comput Comput Assist Interv. 2021 Sep-Oct;12903:313-322. doi: 10.1007/978-3-030-87199-4_30. Epub 2021 Sep 21. PMID: 35647615; PMCID: PMC9137427. https://pmc.ncbi.nlm.nih.gov/articles/PMC9137427/
  2. Liu M, Zhu AH, Maiti P, Thomopoulos SI, Gadewar S, Chai Y, Kim H, Jahanshad N, Alzheimer's Disease Neuroimaging Initiative. Style transfer generative adversarial networks to harmonize multisite MRI to a single reference image to avoid overcorrection. Human Brain Mapping. 2023 Oct 1;44(14):4875-92. https://doi.org/10.1002/hbm.26422

Acknowledgements

We thank the following organizations and individuals, along with all others whose support made this study possible:

  • Images reproduced by kind permission of UK Biobank©
  • Data were provided [in part] by OASIS-1: Cross-Sectional (Principal Investigators: D. Marcus, R. Buckner, J. Csernansky, J. Morris). Funding provided by: P50 AG05681, P01 AG03991, P01 AG026276, R01 AG021910, P20 MH071616, U24 RR021382.
