This repository contains code accompanying the 3D Cine paper by Mark Wrobel.
The pipeline is divided into three main sections:

1. Data pre-processing
2. Model training
3. 3D Cine post-processing

The code uses two publicly available datasets:

- HVSMR
- MMWHS
After downloading, place the data in the corresponding empty folders within the repository.
Note: MMWHS requires separate folders for images and segmentations.
To pre-process the data:

- Run HVSMR_pre_processing.ipynb
- Run MMWHS_pre_processing.ipynb
- Run training_data_preprocessing.ipynb
This will prepare all training data required for model training.
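If you prefer to execute the notebooks non-interactively, a minimal sketch using jupyter nbconvert is shown below. This is not part of the repository's own tooling; it assumes Jupyter is installed in your environment and that the notebooks sit in the repository root.

```python
# Sketch: execute the pre-processing notebooks in order, non-interactively.
# Assumes Jupyter (nbconvert) is installed and this is run from the repository root.
import subprocess

notebooks = [
    "HVSMR_pre_processing.ipynb",
    "MMWHS_pre_processing.ipynb",
    "training_data_preprocessing.ipynb",  # run last: it builds on the two notebooks above
]

for nb in notebooks:
    # --inplace overwrites each notebook with its executed version
    subprocess.run(
        ["jupyter", "nbconvert", "--to", "notebook", "--execute", "--inplace", nb],
        check=True,
    )
```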
Once the data is pre-processed, train the deep learning models by running the following scripts:
- 3D_debanding_train.py
- 3D_respcor_train.py
- 3D_E2E_train.py
- 3D_seg_train.py
Make sure to update the mmwhs_number and hvsmr_number variables to reflect the number of processed datasets.
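As a purely illustrative sketch (the values below are placeholders, and the variables live wherever each training script defines them), the update might look like:

```python
# Placeholder values -- set these to the number of volumes produced by
# your pre-processing run, not to these example figures.
mmwhs_number = 20   # number of processed MMWHS datasets
hvsmr_number = 10   # number of processed HVSMR datasets
```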
Run 3D_cine_post_processing.ipynb.
This step can be run independently of the training pipeline.
An example healthy volunteer dataset is provided in the raw_data folder and is processed using the trained models in models_final/.
Output 3D Cine data and segmentations are saved as .npy arrays in the processed_data folder.
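To inspect the results, the arrays can be loaded back with NumPy. The file names below are placeholders; substitute the names actually written to processed_data/.

```python
import numpy as np

# Placeholder file names -- use the actual names written to processed_data/.
cine = np.load("processed_data/example_3D_cine.npy")
seg = np.load("processed_data/example_segmentation.npy")

print("3D Cine array shape:     ", cine.shape)
print("Segmentation array shape:", seg.shape)
```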
A Dockerfile is included for creating a reproducible environment.