This is the official repository for the manuscript "Interpretable foundation-model-boosted multimodal learning facilitates precision medicine for neuropathic pain".
To restore the environment we used in this study, run:

```bash
conda env create -f environment.yml
```
fMRIPrep is a standardized and reproducible preprocessing pipeline for functional MRI (fMRI) data. This guide describes how to run fMRIPrep using containerized workflows (Apptainer / Singularity / Docker), which is strongly recommended for both local and HPC environments.
fMRIPrep depends on FreeSurfer for anatomical surface reconstruction, so a FreeSurfer license is required:

- Register for a (free) FreeSurfer license.
- After approval, download the `license.txt` file.
- Place the license file in a persistent location, e.g., `~/freesurfer/license.txt`.
- You will later bind this file into the container (see the run example below).

Useful links:

- FreeSurfer license registration: https://surfer.nmr.mgh.harvard.edu/registration.html
Choose the container system based on your environment.

Apptainer:

```bash
apptainer pull docker://nipreps/fmriprep:24.1.0
```

or Singularity:

```bash
singularity pull docker://nipreps/fmriprep:24.1.0
```

Either command will generate:

```
fmriprep_24.1.0.sif
```

Docker:

```bash
docker pull nipreps/fmriprep:24.1.0
```

Useful links:

- fMRIPrep documentation: https://fmriprep.org/en/stable/
- Apptainer documentation: https://apptainer.org/docs/
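To verify the pull succeeded, you can print the packaged fMRIPrep version. The snippet below assumes the Apptainer image built above; the Docker equivalent would be `docker run --rm nipreps/fmriprep:24.1.0 --version`.

```bash
# Optional sanity check: the image's entrypoint is fMRIPrep itself,
# so passing --version prints the packaged version.
apptainer run fmriprep_24.1.0.sif --version
```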
fMRIPrep requires data to be organized following the Brain Imaging Data Structure (BIDS).
Example minimal structure:
```
dataset/
├── dataset_description.json
├── participants.tsv
└── sub-01/
    ├── anat/
    │   └── sub-01_T1w.nii.gz
    └── func/
        ├── sub-01_task-rest_bold.nii.gz
        └── sub-01_task-rest_bold.json
```
Key requirements:

- Proper BIDS naming conventions
- JSON sidecars with mandatory metadata (e.g., `RepetitionTime`)
- A valid `dataset_description.json` (a minimal example follows this list)
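For reference, the BIDS specification requires at least the `Name` and `BIDSVersion` fields in `dataset_description.json`. A minimal example (the dataset name is a placeholder):

```json
{
  "Name": "My Dataset",
  "BIDSVersion": "1.8.0"
}
```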
Useful links:

- BIDS specification: https://bids-specification.readthedocs.io/
An example Apptainer-based script is provided at `data_preprocessing/fmriprep.sh`.
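For orientation, a minimal Apptainer invocation might look like the sketch below. All paths and the participant label are placeholders, and the bind target for the FreeSurfer license inside the container is a common convention, not a requirement; `data_preprocessing/fmriprep.sh` contains the exact options used in this study.

```bash
#!/bin/bash
# Minimal fMRIPrep invocation sketch -- paths and the participant label
# are placeholders; adapt them to your dataset.
apptainer run --cleanenv \
  -B /path/to/bids_dataset:/data:ro \
  -B /path/to/derivatives:/out \
  -B ~/freesurfer/license.txt:/opt/freesurfer/license.txt \
  fmriprep_24.1.0.sif \
  /data /out participant \
  --participant-label 01 \
  --fs-license-file /opt/freesurfer/license.txt
```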
- Download the ICBM152 template from here, or use our demo template `atlases/icbm_avg_152_t1_tal_nlin_symmetric_VI.nii`.
- Run the `resample_to_mni152` function in `data_preprocessing/helper.py` (a hypothetical call is sketched below).
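The exact signature of `resample_to_mni152` is defined in `data_preprocessing/helper.py`; the snippet below is only a sketch of how such a call might look, assuming the function takes the preprocessed image path and the template path (the argument order and the input path are assumptions).

```python
from data_preprocessing.helper import resample_to_mni152

# Hypothetical call -- check helper.py for the actual parameter names.
resample_to_mni152(
    "derivatives/sub-01/func/sub-01_task-rest_bold_preproc.nii.gz",  # placeholder fMRIPrep output
    "atlases/icbm_avg_152_t1_tal_nlin_symmetric_VI.nii",             # demo template from this repo
)
```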
- Download the AAL-424 atlas here, or use our demo atlas `atlases/A424+2mm.nii.gz`.
- Run the `convert_fMRIvols_to_A424` function in `data_preprocessing/helper.py` (see the sketch below).
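Likewise, a hypothetical call to `convert_fMRIvols_to_A424` (the actual parameters live in `data_preprocessing/helper.py`) could look like:

```python
from data_preprocessing.helper import convert_fMRIvols_to_A424

# Hypothetical call -- the input is assumed to be a volume already
# resampled to MNI152 space; the atlas is the demo atlas from this repo.
convert_fMRIvols_to_A424(
    "derivatives/sub-01/func/sub-01_task-rest_bold_mni152.nii.gz",  # placeholder input
    "atlases/A424+2mm.nii.gz",                                      # AAL-424 demo atlas
)
```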
The time-series data can be easily converted to an Arrow-format dataset using `convert_to_arrow` in `data_preprocessing/helper.py`.
Here is a demo script:
```python
from data_preprocessing.helper import convert_to_arrow

args = {
    "ts_data_dir": "path_to_ts_dataset_folder",
    "dataset_name": "Name",
    "metadata_path": "path_to_metadata.csv",
    "save_dir": "directory_to_output_dataset",
}
# The dict keys are assumed to match convert_to_arrow's keyword arguments.
convert_to_arrow(**args)
```

The Arrow dataset will be written to the `save_dir/dataset_name` folder.
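If the output follows the Hugging Face `datasets` on-disk Arrow layout (an assumption, since this repository builds on BrainLM, which uses that library), the result can be sanity-checked like this:

```python
from datasets import load_from_disk

# Assumes the Arrow dataset was written in Hugging Face `datasets` format;
# the path is save_dir/dataset_name from the demo script above.
ds = load_from_disk("directory_to_output_dataset/Name")
print(ds)
```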
The pretrained weights can be found here. Download them and place them under the `brainlm_mae/pretrained_models` folder.
Run `main.py` to train our model. A quick start:

```bash
python main.py --lable_name <the column name of your task's label in your metadata.csv, e.g., response>
```

This repository is built upon BrainLM.