SYNAPSE: Synergizing an Adapter and Finetuning for High-Fidelity EEG Synthesis from a CLIP-Aligned Encoder
SYNAPSE is an efficient two-stage framework for high-fidelity, multi-subject EEG-to-image synthesis that uses a pre-trained, CLIP-aligned autoencoder to condition Stable Diffusion via a lightweight adaptation module.
- State-of-the-Art (SOTA) FID: Achieves a SOTA FID of 46.91 in the challenging multi-subject CVPR40 setting, nearly a 2x improvement over the previous SOTA (GWIT, 80.47).
- High Efficiency: Uses the fewest total trainable parameters (152.69M) of all recent baselines (DreamDiffusion, 210M; BrainVis, 195M; GWIT, 368M), so the entire pipeline can be trained on a single consumer GPU (RTX 3090).
- Direct Alignment Framework: Proposes a novel hybrid autoencoder that is pre-trained to directly align EEG signals with the CLIP embedding space, eliminating the need for the complex, indirect classification stages or separate mapping networks used in prior work (a rough sketch of this idea follows below).
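For intuition only, here is a minimal sketch of what such a direct-alignment pre-training objective could look like. The encoder interfaces, the frozen CLIP target, and the cosine loss are all assumptions of this sketch, not the paper's confirmed implementation:

```python
import torch
import torch.nn.functional as F

def clip_alignment_loss(eeg_encoder, clip_image_encoder, eeg, images):
    """Hypothetical sketch of a direct-alignment pre-training objective.

    The encoder arguments and the cosine loss are illustrative assumptions;
    the actual SYNAPSE architecture and objective may differ.
    """
    z_eeg = eeg_encoder(eeg)                 # (B, d) EEG embedding
    with torch.no_grad():                    # CLIP image encoder kept frozen
        z_img = clip_image_encoder(images)   # (B, d) CLIP-space target
    z_eeg = F.normalize(z_eeg, dim=-1)
    z_img = F.normalize(z_img, dim=-1)
    # Minimize 1 - cosine similarity; a contrastive (InfoNCE) loss
    # would be a common alternative choice here.
    return (1.0 - (z_eeg * z_img).sum(dim=-1)).mean()
```

Minimizing such a loss pulls each EEG embedding toward the CLIP embedding of its paired image, which is what would let the encoder's output condition Stable Diffusion directly.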
Baseline Code: LINK
```
conda create --name=synapse python=3.10
conda activate synapse
pip install -r requirements.txt
```

Pretrained Encoder: LINK
Pretrained LDM: Multi-Subject, Subject-4
Generate images:

```
python gen_images.py
```

Run MAKE IS Dataset.ipynb: LINK
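For intuition, computing an Inception Score over a folder of generated images could look like the following sketch, assuming torchmetrics (with its image extras) is installed. The generated_images folder name and the preprocessing are assumptions; the repo's notebook and test_IS.py may do this differently:

```python
from pathlib import Path
from PIL import Image
from torchvision import transforms
from torchmetrics.image.inception import InceptionScore

# With the default normalize=False, InceptionScore expects uint8 (N, 3, H, W) batches.
to_uint8 = transforms.Compose([
    transforms.Resize((299, 299)),   # Inception-v3 input size
    transforms.PILToTensor(),        # PIL -> uint8 tensor (3, H, W)
])

metric = InceptionScore()
for path in sorted(Path("generated_images").glob("*.png")):  # hypothetical output folder
    img = to_uint8(Image.open(path).convert("RGB"))
    metric.update(img.unsqueeze(0))

is_mean, is_std = metric.compute()
print(f"Inception Score: {is_mean.item():.2f} +/- {is_std.item():.2f}")
```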
Run test_images.py:

```
python test_images.py
```

Run test_IS.py:

```
python test_IS.py
```

We currently support only DDP mode (for code stability).
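Since the code runs under DDP, a multi-GPU run would typically be launched with PyTorch's torchrun. The process count below is an example, and it is an assumption on my part that the training scripts here accept a plain torchrun launch:

```
torchrun --nproc_per_node=2 train_ae.py
```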
Train the autoencoder:
- Set the config, named Train_AE.json: LINK (an illustrative config sketch appears after this list)
- Run from scratch:

```
python train_ae.py
```
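As a purely illustrative sketch of what a Train_AE.json might contain (every key and value below is a hypothetical placeholder; consult the linked config for the actual schema):

```json
{
  "data_root": "<path to the CVPR40 EEG-image dataset>",
  "clip_model": "<CLIP backbone, e.g. ViT-L/14>",
  "batch_size": 64,
  "lr": 0.0001,
  "epochs": 200,
  "output_dir": "<directory for encoder checkpoints>"
}
```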
Train the LDM:
- Set the config, named Train_LDM.json: LINK
- Run from scratch:

```
python train_ldm.py
```

Not ready yet.