
Accelerating multiparametric quantitative MRI using self-supervised scan-specific implicit neural representation with model reinforcement


This repository provides the official PyTorch implementation of the MRM 2025 paper "Accelerating multiparametric quantitative MRI using self-supervised scan-specific implicit neural representation with model reinforcement".

Introduction

REFINE-MORE (REference-Free Implicit NEural representation with MOdel REinforcement) is a self-supervised, scan-specific method for multiparametric quantitative MRI reconstruction that integrates implicit neural representation (INR) with MR physics-based model reinforcement. Specifically, REFINE-MORE models the underlying weighted images and multiparametric maps as coordinate-based functions, parameterized by hash encodings and MLPs, providing a compact and memory-efficient representation of the entire four-dimensional (3D spatial + parametric) data. A model reinforcement module further refines these parameter estimates by enforcing data consistency with the measured k-space data, thereby improving reconstruction accuracy and robustness.

Figure1.jpg
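To make the coordinate-based representation concrete, below is a minimal sketch of an INR built from a tiny-cuda-nn hash encoding followed by an MLP, mapping normalized 3D coordinates to per-voxel weighted-image values. All hyperparameters, the number of output contrasts, and the configuration values are illustrative assumptions, not the settings used in model.py.

import torch
import tinycudann as tcnn

# Hash-encoded coordinate network: 3D coordinates -> n_contrast weighted-image values.
# Every value below is an illustrative assumption, not taken from this repository.
n_contrast = 8  # assumed number of weighted images along the parametric dimension

inr = tcnn.NetworkWithInputEncoding(
    n_input_dims=3,
    n_output_dims=n_contrast,
    encoding_config={
        "otype": "HashGrid",
        "n_levels": 16,
        "n_features_per_level": 2,
        "log2_hashmap_size": 19,
        "base_resolution": 16,
        "per_level_scale": 1.5,
    },
    network_config={
        "otype": "FullyFusedMLP",
        "activation": "ReLU",
        "output_activation": "None",
        "n_neurons": 64,
        "n_hidden_layers": 2,
    },
).cuda()

# Query the representation at normalized voxel coordinates in [0, 1]^3.
coords = torch.rand(4096, 3, device="cuda")
weighted_images = inr(coords)  # shape: (4096, n_contrast)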

Project Structure

The main components of this repository are organized as follows:

REFINE-MORE/
├── recon_demo.py              # End-to-end demo script for running REFINE-MORE reconstruction on the provided example dataset.
├── model.py                   # Core implementation of the INR-based initialization and the unrolled physics reinforcement.
├── loss_function.py           # Loss functions used for training.
├── Utils.py                   # Utility functions.
├── unet/                      # UNet architecture
│   ├── unet_model.py
│   ├── unet_parts.py
│   └── pre_trained_weights/   # Pre-trained UNet weights
├── outputs_demo/              # Example outputs from running the demo (logs, trained models, and reconstructed results).

Getting Started

The hardware and software environment used for testing:

  • OS: Rocky Linux release 8.10 (Green Obsidian)
  • CPU: Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz
  • GPU: NVIDIA A100 80GB
  • CUDA: 12.2
  • PyTorch: 1.13.1
  • Python: 3.10.16

Installation

  1. Download and install the appropriate NVIDIA driver and CUDA version for your GPU.
  2. Download and install Anaconda or Miniconda.
  3. Clone this repo and cd into the project directory:
git clone https://github.com/I3Tlab/REFINE-MORE.git
cd REFINE-MORE
  4. Create and activate the Conda environment:
conda create --name REFINE_MORE python=3.10.16
conda activate REFINE_MORE
  5. Install the remaining dependencies:
pip install -r requirements.txt
  6. Install the PyTorch extension of tiny-cuda-nn (see the command below).
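The upstream NVlabs/tiny-cuda-nn repository documents installing its PyTorch bindings directly from GitHub; the command below follows that documentation and may need adjustment for your compiler and CUDA setup:

pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch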

Inference

We provide an example fully sampled k-space dataset of multiparametric quantitative magnetization transfer imaging, which can be found here.

To run the reconstruction demo, please use the following command:

python recon_demo.py

Reconstruction results are written to the outputs/ folder in .mat format.
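To quickly inspect the saved results, the .mat files can be loaded in Python. This is a minimal sketch; the filename "recon_result.mat" is a hypothetical placeholder, and the actual filenames and variable keys are determined by recon_demo.py.

import numpy as np
import scipy.io as sio

# "recon_result.mat" is a hypothetical filename; check the output folder for the actual names.
result = sio.loadmat("outputs/recon_result.mat")

# List the variables saved in the file (skipping MATLAB metadata keys).
keys = [k for k in result if not k.startswith("__")]
print("saved variables:", keys)

# Inspect the first saved array as an example.
recon = np.asarray(result[keys[0]])
print("shape:", recon.shape, "dtype:", recon.dtype)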

Citation

If you use REFINE-MORE in your research, please cite the corresponding paper:

@article{feng2025,
  author  = {Feng, Ruimin and Jang, Albert and He, Xingxin and Liu, Fang},
  title   = {Accelerating Multiparametric Quantitative MRI Using Self-Supervised Scan-Specific Implicit Neural Representation With Model Reinforcement},
  journal = {Magnetic Resonance in Medicine},
  year    = {2025},
  note    = {Early View},
  doi     = {10.1002/mrm.70227},
  url     = {https://onlinelibrary.wiley.com/doi/abs/10.1002/mrm.70227}
}

Contacts

Intelligent Imaging Innovation and Translation Lab (GitHub: I3Tlab) at the Athinoula A. Martinos Center of Massachusetts General Hospital and Harvard Medical School

149 13th Street, Suite 2301 Charlestown, Massachusetts 02129, USA
