Implementation of MRF-Mixer: A Simulation-Based Deep Learning Framework for Accelerated and Accurate Magnetic Resonance Fingerprinting Reconstruction (Ding et al., Information 2025, 16, 218). The toolkit provides a reproducible, configuration-driven pipeline for end-to-end quantitative map reconstruction from IR-bSSFP fingerprinting data.
- Complex-valued feature mixer coupled with a lightweight multi-head U-Net (`MRFMixer`).
- Configurable data loader supporting SVD-compressed patches, masks, and multi-shot sampling (1-shot, 3-shot, 6-shot).
- Training, evaluation, and inference scripts with YAML-based experiment definitions.
- Mask-aware losses and metrics (MAE, RMSE) aligned with the paper.
- TensorBoard-ready logging, checkpoint management, and NIfTI export utilities for T1/T2/B0 maps.
mrf-mixer/
├── artifacts/ # Placeholder for pretrained checkpoints
├── configs/ # YAML experiment definitions (1/3/6-shot)
├── docs/SCAN_REPORT.md # Cleanup audit log
├── mrf_mixer/ # Python package with models, datasets, utilities
├── scripts/ # CLI entry points (train/evaluate/infer)
├── tests/ # Unit tests and smoke checks
└── test_data/ # Sample NIfTI patches for smoke tests
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
# install the package in editable mode for CLI usage
pip install -e .
Note: PyTorch installation should match your CUDA/cuDNN stack; feel free to install from https://pytorch.org/get-started/locally/ instead of the wheel listed in `requirements.txt`.
Expected directory layout:
data/
1shot/
train/*.nii.gz
val/*.nii.gz
test/*.nii.gz
3shot/
...
6shot/
...
Each NIfTI patch must contain:
- Real SVD coefficients (channels 0:200)
- Imaginary SVD coefficients (channels 200:400)
- Quantitative maps: T1, T2, B0 (channels -5, -4, -3)
- Optional M0 (ignored) and foreground mask (channel -1)
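Given that layout, unpacking a loaded patch array is straightforward. The sketch below uses plain NumPy on a dummy array; the repository's data loader handles this internally, so the helper name and the (H, W, C) shape here are illustrative assumptions:

```python
import numpy as np

def split_patch_channels(patch: np.ndarray):
    """Split a (H, W, C) patch into complex signal, targets, and mask.

    Assumes the channel layout described above: 200 real + 200 imaginary
    SVD coefficients, then T1/T2/B0/M0/mask in the last five channels.
    """
    real = patch[..., 0:200]           # real SVD coefficients
    imag = patch[..., 200:400]         # imaginary SVD coefficients
    t1, t2, b0 = (patch[..., i] for i in (-5, -4, -3))
    mask = patch[..., -1]              # foreground mask (M0 at -2 is ignored)
    signal = real + 1j * imag          # recombine into complex-valued input
    return signal, np.stack([t1, t2, b0], axis=-1), mask

# Example with a dummy 32x32 patch holding 400 + 5 = 405 channels
patch = np.zeros((32, 32, 405), dtype=np.float32)
signal, targets, mask = split_patch_channels(patch)
print(signal.shape, targets.shape, mask.shape)  # (32, 32, 200) (32, 32, 3) (32, 32)
```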
If you maintain custom train/val/test splits, list file paths (one per line) and point `train_list`, `val_list`, or `test_list` to those text files inside the configuration.
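A split list file is plain text with one filename per line, relative to the split directory (the filenames below are hypothetical):

```
patch_000.nii.gz
patch_001.nii.gz
patch_002.nii.gz
```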
If your data lives in `tensorflow_datasets`, set `data.backend: tfds` and specify:
- `data.tfds_name`: TFDS dataset name (e.g. `mrf_6shot_org_svd`)
- `data.<split>_tfds_split`: split expressions (e.g. `train[:90%]`, `train[90%:]`)
- `data.parameter_indices` / `data.mask_index`: channel locations within `label`
Install tensorflow and tensorflow-datasets manually if you plan to use this backend:
pip install tensorflow tensorflow-datasets
The trainer materialises TFDS examples into memory for compatibility with PyTorch’s Dataset API, so ensure the selected split fits in RAM or reduce it via `data.tfds_limit`.
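Putting those keys together, a `data` section for the TFDS backend might look like the following (the dataset name and split expressions echo the examples above; the `tfds_limit` value is purely illustrative):

```yaml
data:
  backend: tfds
  tfds_name: mrf_6shot_org_svd
  train_tfds_split: "train[:90%]"
  val_tfds_split: "train[90%:]"
  parameter_indices: [-5, -4, -3]   # T1/T2/B0 channels within `label`
  mask_index: -1                    # foreground mask channel
  tfds_limit: 2000                  # cap on examples materialised into RAM
```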
python scripts/train.py --config configs/train_1shot.yaml

Key configuration knobs (see YAML files):
- `model.*` controls MRFMixer topology (mixer depth, channels, dropout).
- `training.*` controls optimisation (epochs, LR schedule, scaling factors).
- `data.*` controls dataset location, splits, and parallelism.
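For orientation, a hypothetical `training` section might combine the knobs mentioned in this README as follows; all values, and any key name not named elsewhere in this document, are assumptions rather than repository defaults:

```yaml
training:
  epochs: 100               # illustrative value
  mixed_precision: true
  log_interval: 50          # steps between metric/loss logs
  milestones: [60, 80]      # MultiStepLR epochs (exact key name assumed)
  target_scales: [5000.0, 500.0, 100.0]  # T1/T2 (ms) and B0 (Hz) normalisers, illustrative
```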
Checkpoints (best and last) are written to `checkpoints/<shot>/`. TensorBoard logs go under `logs/`.
python scripts/evaluate.py \
--config configs/train_1shot.yaml \
--checkpoint checkpoints/1shot/mrf_mixer_best.pt

Produces overall masked MSE loss plus per-target MAE/RMSE.
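The masked metrics can be sketched in a few lines of NumPy: only voxels inside the foreground mask contribute, and MAE/RMSE are reported per target (T1, T2, B0). This mirrors the description above rather than the repository's exact implementation:

```python
import numpy as np

def masked_metrics(pred: np.ndarray, target: np.ndarray, mask: np.ndarray):
    """Per-target MAE and RMSE over foreground voxels only.

    pred/target: (..., n_targets) arrays; mask: (...) boolean/0-1 array.
    """
    m = mask.astype(bool)
    err = pred[m] - target[m]                # (n_voxels, n_targets)
    mae = np.abs(err).mean(axis=0)
    rmse = np.sqrt((err ** 2).mean(axis=0))
    return mae, rmse

# Toy example: constant +2 error inside the mask, two targets
target = np.zeros((4, 4, 2))
pred = np.full((4, 4, 2), 2.0)
mask = np.zeros((4, 4)); mask[1:3, 1:3] = 1
mae, rmse = masked_metrics(pred, target, mask)
print(mae, rmse)  # both [2. 2.]
```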
Export T1/T2 (and optionally B0) maps as NIfTI volumes:
python scripts/infer.py \
--config configs/train_1shot.yaml \
--checkpoint checkpoints/1shot/mrf_mixer_best.pt \
--input test_data/test_1shot \
--output outputs/inference

Add `--all-targets` to also save B0 predictions. Results inherit the affine/header from the input files so they can be loaded into standard neuroimaging viewers.
- Training is mixed-precision by default (enable/disable in `training.mixed_precision`).
- LR scheduling uses `MultiStepLR` with milestones defined per configuration.
- Metrics and losses are logged every `training.log_interval` steps.
- Synthetic data for quick debugging is available through `SyntheticMRFDataset`.
- Input patches follow the same channel ordering as the original paper (IR-bSSFP, 200 SVD components per real/imag branch, followed by T1/T2/B0/M0/mask).
- Train/val/test splits are materialised on disk under `data/<shot>/` with consistent naming; optional list files contain filenames relative to the split directory.
- Ground-truth maps are in milliseconds (T1/T2) and Hz (B0); scaling factors in `training.target_scales` normalise them for stable optimisation.
- GPU training is available for large-scale experiments.
- Temporal SVD bases are pre-applied to the datasets; if starting from raw 1000-frame fingerprints, apply your own SVD compression before using this repository.
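If you do need to compress raw fingerprints yourself, a minimal temporal-SVD sketch in NumPy follows (200 components to match the channel layout expected here; deriving the basis from the data itself is an assumption — in practice it is often computed once from a simulated dictionary and reused):

```python
import numpy as np

def temporal_svd_compress(fingerprints: np.ndarray, n_components: int = 200):
    """Project time-domain fingerprints onto a truncated temporal SVD basis.

    fingerprints: (n_voxels, n_frames) complex array, e.g. n_frames = 1000.
    Returns (coefficients, basis) of shapes (n_voxels, n_components) and
    (n_frames, n_components). Requires n_voxels >= n_components when the
    basis is estimated from the data itself, as done here.
    """
    _, _, vh = np.linalg.svd(fingerprints, full_matrices=False)
    basis = vh[:n_components].conj().T       # (n_frames, n_components), orthonormal columns
    coeffs = fingerprints @ basis            # project each voxel onto the basis
    return coeffs, basis

# Toy example: 500 voxels, 1000 frames, compressed to 200 coefficients
x = np.random.randn(500, 1000) + 1j * np.random.randn(500, 1000)
coeffs, basis = temporal_svd_compress(x)
print(coeffs.shape, basis.shape)  # (500, 200) (1000, 200)
```

The compressed signal can be approximately reconstructed as `coeffs @ basis.conj().T`, which is what makes the 200-channel real/imaginary patch representation above possible.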
Please cite the original paper when using this implementation:
@article{ding2025mrfmixer,
title={MRF-Mixer: A Simulation-Based Deep Learning Framework for Accelerated and Accurate Magnetic Resonance Fingerprinting Reconstruction},
author={Ding, Tianyi and others},
journal={Information},
volume={16},
number={3},
pages={218},
year={2025},
doi={10.3390/info16030218}
}