
OmniMRI

Official PyTorch implementation of "OmniMRI: A Unified Vision–Language Foundation Model for Generalist MRI Interpretation"

Introduction

OmniMRI is a vision–language foundation model that unifies the full MRI workflow, including reconstruction, segmentation, detection, diagnosis, and report generation, within a single multimodal architecture. Trained on 60 public datasets comprising more than 220,000 MRI volumes and 19 million slices, OmniMRI integrates imaging and clinical language through a multi-stage training paradigm, enabling inference across anatomies, contrasts, and tasks.

📄 Read the paper on arXiv: https://arxiv.org/abs/2508.17524

Getting Started

Code and usage instructions are coming soon.
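Until the official code is released, the following is a minimal, hypothetical PyTorch sketch of the general pattern described in the Introduction: an MRI slice is encoded into visual tokens, which are concatenated with embedded instruction tokens and decoded jointly by a transformer to produce text. All names, layers, and dimensions below are illustrative assumptions, not OmniMRI's actual architecture or API.

```python
import torch
import torch.nn as nn

class ToyVisionLanguageModel(nn.Module):
    """Hypothetical sketch of a unified vision-language pattern:
    encode an image into tokens, prepend them to text-token embeddings,
    and process the joint sequence with a transformer. Not OmniMRI's API."""

    def __init__(self, vocab_size=1000, d_model=256):
        super().__init__()
        # Vision encoder: patchify a single-channel MRI slice into d_model tokens.
        self.patch_embed = nn.Conv2d(1, d_model, kernel_size=16, stride=16)
        # Text embedding for the tokenized task instruction (e.g., "segment the lesion").
        self.token_embed = nn.Embedding(vocab_size, d_model)
        # Bidirectional transformer stack used here for brevity; a real
        # autoregressive decoder would apply a causal attention mask.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, image, text_ids):
        # image: (B, 1, H, W); text_ids: (B, T)
        img_tokens = self.patch_embed(image).flatten(2).transpose(1, 2)  # (B, N, d)
        txt_tokens = self.token_embed(text_ids)                          # (B, T, d)
        seq = torch.cat([img_tokens, txt_tokens], dim=1)                 # (B, N+T, d)
        hidden = self.decoder(seq)
        # Return next-token logits for the text positions only.
        return self.lm_head(hidden[:, img_tokens.size(1):])

model = ToyVisionLanguageModel()
image = torch.randn(2, 1, 128, 128)       # two synthetic MRI slices
prompt = torch.randint(0, 1000, (2, 12))  # tokenized task instruction
logits = model(image, prompt)             # (2, 12, 1000)
print(logits.shape)
```

In a multi-stage training paradigm like the one the paper describes, a model of this shape would presumably be trained with next-token prediction over clinical text conditioned on the image; the causal mask omitted above would be required for autoregressive report generation.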

Publication

@misc{he2025omnimriunifiedvisionlanguagefoundation,
      title={OmniMRI: A Unified Vision--Language Foundation Model for Generalist MRI Interpretation}, 
      author={Xingxin He and Aurora Rofena and Ruimin Feng and Haozhe Liao and Zhaoye Zhou and Albert Jang and Fang Liu},
      year={2025},
      eprint={2508.17524},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2508.17524}, 
}

Contacts

Intelligent Imaging Innovation and Translation Lab (https://github.com/I3Tlab) at the Athinoula A. Martinos Center, Massachusetts General Hospital and Harvard Medical School

149 13th Street, Suite 2301, Charlestown, Massachusetts 02129, USA

For specific code requests, please contact the authors.
