# SCIGMA: Spatially informed, Contrastive learning-based Integration with Graph neural networks for Multi-modal Analysis
We present SCIGMA, a deep learning framework for integrating multi-modal spatial omics data. Using uncertainty-based contrastive learning that accounts for both intra- and inter-modality alignment, SCIGMA accurately aligns multiple modalities. SCIGMA has been evaluated across a variety of modalities and technologies, including spatial ATAC-seq, SPOTS, 10x Xenium and 10x Xenium Prime 5K, 10x Visium HD, Stereo-CITE-seq, CUT&Tag-seq, and spatial metabolomics.
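To make the idea concrete, below is a minimal sketch of what an uncertainty-weighted cross-modal contrastive loss can look like. The InfoNCE-style objective and heteroscedastic weighting are illustrative assumptions, and all function and variable names are hypothetical; this is not SCIGMA's actual implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(z_a, z_b, log_var, temperature=0.1):
    """Illustrative InfoNCE-style loss aligning two modality embeddings.

    z_a, z_b : (n_cells, d) embeddings of the same spots/cells from two
               modalities; row i of z_a is the positive pair of row i of z_b.
    log_var  : (n_cells,) per-cell log-variance acting as an uncertainty
               estimate (assumed form; SCIGMA's weighting may differ).
    """
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.T / temperature                 # cross-modality similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    per_cell = F.cross_entropy(logits, targets, reduction="none")
    # Down-weight high-uncertainty cells, heteroscedastic-loss style;
    # the log_var term penalizes declaring everything uncertain.
    precision = torch.exp(-log_var)
    return (precision * per_cell + log_var).mean()
```

In this kind of weighting, cells with high predicted uncertainty contribute less to the alignment term, so noisy measurements in one modality do not dominate the integration.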
SCIGMA has been tested with Python 3.8 and the package versions listed in requirements.txt. All analyses were run either on a single cluster node with a 24 GB GPU and 100 GB of RAM, or on a cluster node with 24 CPUs and up to 400 GB of RAM.
SCIGMA is designed to work on all operating systems in principle, and has been tested on the following systems:
- Linux: Red Hat Enterprise Linux 9.2
- macOS: Ventura 13.4
Below are installation instructions for SCIGMA and its required environment. Installation should take 10-20 minutes on a standard desktop.
- Clone the repository

  ```bash
  git clone https://github.com/YMa-lab/SCIGMA.git
  ```

- Create a virtual environment (python or conda) with Python 3.8

  ```bash
  conda create -n SCIGMA python=3.8
  ```

- Activate the environment

  ```bash
  conda activate SCIGMA
  ```

- Install R packages

  ```bash
  conda install -c conda-forge r-base=4.0.5
  conda install -c conda-forge r-mclust==5.4.9
  ```

- Install base Python packages

  ```bash
  pip install -r /path/to/requirements.txt
  ```

- Install CUDA-related packages

  ```bash
  pip install torch==2.0.1+cu117 torchvision==0.15.2+cu117 torchaudio==2.0.2+cu117 -f https://download.pytorch.org/whl/torch_stable.html
  ```

- For Jupyter notebooks: install ipykernel and register the environment as a kernel

  ```bash
  conda install -c anaconda ipykernel
  python -m ipykernel install --user --name=SCIGMA
  ```

For running SCIGMA on a dataset, refer to our tutorial: https://github.com/YMa-lab/SCIGMA/blob/main/tutorial/SCIGMA_Tutorial.ipynb
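As an optional sanity check before working through the tutorial, the snippet below verifies that the CUDA build of PyTorch installed correctly; the expected version string follows from the pip command above.

```python
# Sanity check for the SCIGMA environment (run inside the activated env).
import torch

print(torch.__version__)          # expect "2.0.1+cu117"
print(torch.cuda.is_available())  # True if the CUDA build can see a GPU
```

If `torch.cuda.is_available()` returns `False`, analyses can still be run on a CPU-only node as described in the system requirements above, at the cost of longer runtimes.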