Deep learning techniques have been used in MRI reconstruction to produce high-quality images from raw MRI data, reducing the time needed to produce an MRI image. However, these techniques optimize quality over the entire image at once, whereas medical professionals need high reconstruction quality specifically in diagnostically relevant regions. Our work therefore introduces methods for region-specific MRI reconstruction that give diagnostically relevant regions of interest greater importance during reconstruction, improving their quality.
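One simple way to give diagnostically relevant regions greater importance is a region-weighted reconstruction loss. The sketch below is our own illustration of the idea, not the project's exact loss; the function name, weighting scheme, and toy data are all assumptions:

```python
import numpy as np

def region_weighted_l1(pred, target, roi_mask, roi_weight=5.0):
    """L1 reconstruction loss that up-weights error inside a
    diagnostically relevant region of interest (ROI).
    `roi_mask` is a boolean array marking the ROI."""
    weights = np.where(roi_mask, roi_weight, 1.0)
    return float(np.sum(weights * np.abs(pred - target)) / np.sum(weights))

# Toy 4x4 example: the same pixel error counts roi_weight times more
# when it falls inside the ROI.
pred = np.zeros((4, 4))
target = np.zeros((4, 4))
target[1, 1] = 1.0                          # error of 1 inside the ROI
roi = np.zeros((4, 4), dtype=bool)
roi[:2, :2] = True                          # top-left 2x2 block is the ROI
print(region_weighted_l1(pred, target, roi))  # -> 0.15625
```

The same weighting idea can be applied to other base losses (e.g., SSIM) by masking before aggregation.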
Experiment results are available at the following link: https://wandb.ai/ebruda01-georgia-institute-of-technology/deep_learning_fastmri_project/table?nw=nwuserebruda01
Website | Dataset | GitHub | Publications
fastMRI is a collaborative research project from Facebook AI Research (FAIR) and NYU Langone Health to investigate the use of AI to make MRI scans faster. NYU Langone Health has released fully anonymized knee and brain MRI datasets that can be downloaded from the fastMRI dataset page. Publications associated with the fastMRI project can be found at the end of this README.
There are multiple publications describing different subcomponents of the data (e.g., brain vs. knee) and associated baselines. All of the fastMRI data can be downloaded from the fastMRI dataset page.
- Meta Project Summary, Datasets, Baselines: fastMRI: An Open Dataset and Benchmarks for Accelerated MRI ({J. Zbontar*, F. Knoll*, A. Sriram*} et al., 2018)
- Brain Dataset Properties: Supplemental Material of Results of the 2020 fastMRI Challenge for Machine Learning MR Image Reconstruction ({M. Muckley*, B. Riemenschneider*} et al., 2021)
- Connect to the Georgia Tech VPN using GlobalProtect or https://vpn.gatech.edu/
- Go to https://ondemand-ice.pace.gatech.edu/pun/sys/dashboard/
- Click on Interactive Apps
- Click on Visual Studio Code
- Request a GPU (ideally an NVIDIA H100 or H200, but any NVIDIA GPU works)
- Click request and then wait to connect
- If you haven't set up conda on your PACE account before, follow the steps below. Otherwise, skip to step #2
- Go to your home directory /hice1/userId
- Follow the steps here to download miniconda3: https://www.anaconda.com/docs/getting-started/miniconda/install#macos-linux-installation
- Check that the installation worked by typing conda in the terminal
- Link your conda environment to your scratch folder. This is important since the packages take up a lot of space
- Tutorial here: https://gatech.service-now.com/home?id=kb_article_view&sysparm_article=KB0041621
- Note: for the p-- mentioned, use your GT username (e.g., ebruda3)
- Setup the environment for this project
- Create the conda environment by running conda env create -f dl_environment.yml
- Activate the environment
- If you haven’t done this yet, run pip install -e .
- This installs the fastmri project
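For reference, conda environment files follow a standard format; a minimal sketch of what dl_environment.yml might contain is below. The package names and versions are illustrative assumptions only; the actual file in the repo is the source of truth:

```yaml
# Illustrative sketch of a conda environment file (not the repo's actual
# dl_environment.yml). The environment name matches the one used later.
name: dl_proj_2
channels:
  - pytorch
  - conda-forge
dependencies:
  - python=3.10
  - pytorch
  - numpy
  - pip
  - pip:
      - pytorch-lightning
```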
- Clone the following GitHub repo into your scratch folder (ex: ebruda3/scratch): https://github.com/Deep-Learning-Project-FastMRI/fastMRI_deep_learning
- Make sure you clone the repo (rather than downloading it) so you can do a git push later
- Sign up to download the dataset files from NYU https://datacatalog.med.nyu.edu/dataset/10389
- Download the dataset zip files by running the curl commands from the NYU instructions
- (Optional) Run subsample_files.py to subsample the training, validation, and test data depending on available storage
- Put all of the files in a data folder. Ex:
ebruda3/scratch/fastmri_deep_learning/data/{train/val/test}
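To catch path mistakes early, you can sanity-check that the data folder contains the expected split sub-folders. The helper below is a hypothetical convenience for this README's layout, not part of the fastMRI repo:

```python
from pathlib import Path
import tempfile

def check_fastmri_layout(data_root):
    """Return the names of any expected split sub-folders (train/val/test)
    that are missing under data_root. Empty list means the layout looks OK.
    (Helper name and checks are illustrative, not part of the repo.)"""
    root = Path(data_root)
    return [s for s in ("train", "val", "test") if not (root / s).is_dir()]

# Demo against a throwaway directory standing in for
# .../fastmri_deep_learning/data/
with tempfile.TemporaryDirectory() as d:
    for split in ("train", "val"):
        (Path(d) / split).mkdir()
    print(check_fastmri_layout(d))  # -> ['test']  (only 'test' is missing)
```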
- cd into fastmri_deep_learning/fastmri_examples/unet/
- Activate the dl_proj_2 conda environment
- Change the knee_path in the fastmri_dirs.yaml file to be wherever your data is saved
- Ex: knee_path: "/home/hice1/ebruda3/scratch/fastMRI_deep_learning/data/"
- Start training the model by running python train_unet_demo.py
- python train_unet_demo.py --experiment_mode=benchmark --mode=train
- python train_unet_demo.py --experiment_mode=benchmark --mode=test
- python train_unet_demo.py --experiment_mode=benchmark --mode=val
- python train_unet_demo.py --experiment_mode=manual --mode=train
- python train_unet_demo.py --experiment_mode=manual --mode=test
- python train_unet_demo.py --experiment_mode=manual --mode=val
- python train_unet_demo.py --experiment_mode=heatmap --mode=train
- python train_unet_demo.py --experiment_mode=heatmap --mode=test
- python train_unet_demo.py --experiment_mode=heatmap --mode=val
- python train_unet_demo.py --experiment_mode=attention --mode=train
- python train_unet_demo.py --experiment_mode=attention --mode=test
- python train_unet_demo.py --experiment_mode=attention --mode=val
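The twelve commands above form a grid of four experiment modes (benchmark, manual, heatmap, attention) by three run modes (train, test, val). A minimal argparse sketch of how such a CLI could validate those flags is below; this is hypothetical, and the real train_unet_demo.py may define more options:

```python
import argparse

EXPERIMENT_MODES = ("benchmark", "manual", "heatmap", "attention")
RUN_MODES = ("train", "test", "val")

def parse_args(argv):
    """Parse --experiment_mode / --mode the way the commands above use them.
    (Sketch only; not the actual train_unet_demo.py argument parser.)"""
    parser = argparse.ArgumentParser()
    parser.add_argument("--experiment_mode", choices=EXPERIMENT_MODES,
                        required=True, help="which experiment variant to run")
    parser.add_argument("--mode", choices=RUN_MODES,
                        required=True, help="train, test, or val")
    return parser.parse_args(argv)

args = parse_args(["--experiment_mode=benchmark", "--mode=train"])
print(args.experiment_mode, args.mode)  # -> benchmark train
```

Using `choices` means a typo such as --mode=vall fails fast with a clear error instead of starting a mislabeled run.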
- To run training in the background so it keeps going after you disconnect: nohup python -u train_unet_demo.py --mode "MODE" --experiment_mode "EXPERIMENT" > "LOG_FILE_NAME".log 2>&1 &
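In that command, nohup keeps the run alive after you log out, -u disables Python's output buffering so the log updates promptly, and 2>&1 sends errors to the same log. The pattern can be demonstrated with a stand-in command:

```shell
# Background-run pattern demo with a stand-in command;
# replace the sh -c '...' part with the real training invocation.
nohup sh -c 'echo training step 1' > demo.log 2>&1 &
wait                 # block until the background job finishes (demo only)
cat demo.log         # shows the captured output
```

While a real run is going, follow its progress with tail -f "LOG_FILE_NAME".log and check that the process is still alive with ps.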
fastMRI is MIT licensed, as found in the LICENSE file.
@misc{zbontar2018fastMRI,
title={{fastMRI}: An Open Dataset and Benchmarks for Accelerated {MRI}},
author={Jure Zbontar and Florian Knoll and Anuroop Sriram and Tullie Murrell and Zhengnan Huang and Matthew J. Muckley and Aaron Defazio and Ruben Stern and Patricia Johnson and Mary Bruno and Marc Parente and Krzysztof J. Geras and Joe Katsnelson and Hersh Chandarana and Zizhao Zhang and Michal Drozdzal and Adriana Romero and Michael Rabbat and Pascal Vincent and Nafissa Yakubova and James Pinkerton and Duo Wang and Erich Owens and C. Lawrence Zitnick and Michael P. Recht and Daniel K. Sodickson and Yvonne W. Lui},
journal = {ArXiv e-prints},
archivePrefix = "arXiv",
eprint = {1811.08839},
year={2018}
}