This repository contains PyTorch implementations of the four models described in . The models are applied to microscopy modality transfer, translating light optical microscopy (LOM) images into scanning electron microscopy (SEM) images. In our experiments, the diffusion models far outperformed the AdaIN style-mixing and Pix2PixHD GAN implementations.
Training the Pix2Pix and Palette models requires at least one NVIDIA GPU with at least 12 GB of memory.
64-bit Python 3.12 and PyTorch 2.2. See https://pytorch.org/ for PyTorch install instructions.
Python dependencies can be installed via the requirements file:
pip install -r requirements.txt
AdaIN is a style-mixing generation model based on an encoder-decoder structure. Paper | Torch Implementation
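The core AdaIN operation is compact: the content features are normalized and then re-scaled so that their per-channel mean and standard deviation match those of the style features. The sketch below is illustrative only; the `adain` helper and the tensor shapes are assumptions, not this repository's actual API.

```python
import torch

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive Instance Normalization: shift/scale the content features
    so their per-channel statistics match the style features."""
    # Per-channel statistics over the spatial dimensions of (N, C, H, W)
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean

# Hypothetical encoder outputs, e.g. LOM features styled toward SEM features
content = torch.randn(1, 64, 32, 32)
style = torch.randn(1, 64, 32, 32)
out = adain(content, style)
```

In the full model, `content` and `style` would come from a shared pretrained encoder, and a decoder maps the re-normalized features back to image space.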
Pix2PixHD is a conditional image generation model with a GAN architecture and standard MLP models. Paper | Official Implementation | Project
To train a Pix2PixHD model, run the following command:
python -m Pix2Pix.train_p2phd --config Pix2Pix/configs/lom2sem.yaml
Palette is a conditional image generation framework for diffusion models with a U-Net architecture. Paper | Official Implementation
This implementation builds on this unofficial PyTorch implementation: Palette-Image-to-Image-Diffusion-Models
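For intuition, a Palette-style training step noises the target image, conditions the U-Net on the source image by channel-wise concatenation, and regresses the injected noise. Everything below (the `TinyUNet` stand-in, schedule constants, and function names) is a heavily simplified sketch, not this repository's code:

```python
import torch
import torch.nn.functional as F

class TinyUNet(torch.nn.Module):
    """Toy stand-in for the noise-predicting U-Net (timestep embedding omitted)."""
    def __init__(self, in_ch=6, out_ch=3):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(in_ch, 32, 3, padding=1),
            torch.nn.ReLU(),
            torch.nn.Conv2d(32, out_ch, 3, padding=1),
        )

    def forward(self, x, t):
        return self.net(x)

# Linear beta schedule and cumulative alphas (illustrative values)
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cum = torch.cumprod(1.0 - betas, dim=0)

def training_step(model, x_cond, x_target):
    """One denoising-diffusion step: noise the SEM target, condition on the
    LOM image by concatenation, and regress the injected noise."""
    t = torch.randint(0, T, (x_target.shape[0],))
    noise = torch.randn_like(x_target)
    a = alphas_cum[t].view(-1, 1, 1, 1)
    x_noisy = a.sqrt() * x_target + (1 - a).sqrt() * noise
    pred = model(torch.cat([x_cond, x_noisy], dim=1), t)
    return F.mse_loss(pred, noise)

model = TinyUNet()
loss = training_step(model, torch.randn(2, 3, 16, 16), torch.randn(2, 3, 16, 16))
```

At sampling time, the trained network is applied iteratively to denoise pure noise, with the LOM condition concatenated at every step.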
To train a Palette model, run the following command:
python -m Palette.run -p train -c ./Palette/config/lom2sem.json
To evaluate the models, use the eval.py script to compute the IoU, IS, and FID metrics:
python eval.py -t [ground truth image path] -g [generated image path]
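Of these metrics, IoU reduces to intersection over union of binarized images, while IS and FID are normally computed from pretrained Inception features rather than by hand. A minimal IoU sketch for intuition (the `iou` helper and its threshold are assumptions, not the actual eval.py code):

```python
import numpy as np

def iou(pred, target, threshold=0.5):
    """Intersection-over-Union between two images after binarization."""
    p = pred >= threshold
    t = target >= threshold
    inter = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    return inter / union if union > 0 else 1.0

a = np.array([[1, 1], [0, 0]], dtype=float)
b = np.array([[1, 0], [1, 0]], dtype=float)
score = iou(a, b)  # one overlapping pixel, three pixels in the union
```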