A deep learning model that translates Synthetic Aperture Radar (SAR) images to optical RGB images using Pix2Pix (conditional GAN) with a U-Net generator.
This project implements SAR-to-Optical image translation, enabling the conversion of radar imagery into natural-looking optical images. The model is based on the Pix2Pix architecture and has been fine-tuned on the QXSLAB_SAROPT dataset.
Key Features:
- Pix2Pix with U-Net Generator (54.5M parameters)
- PatchGAN Discriminator
- Pre-trained on Sentinel-1 SAR data
- Fine-tuned on QXSLAB_SAROPT (20,000 paired images)
- Supports LoRA fine-tuning for efficient adaptation (a minimal adapter sketch follows this list)
- Streamlit web app for interactive inference
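For background on the LoRA feature above: the general idea is to freeze the pretrained weights and train only a small low-rank update. The adapter below is a generic sketch of that idea, not this project's actual LoRA code:

```python
# Hypothetical LoRA-style adapter for a conv layer; the project's actual
# LoRA integration may be structured differently.
import torch
import torch.nn as nn

class LoRAConv2d(nn.Module):
    """Frozen base conv plus a trainable low-rank update (rank r)."""
    def __init__(self, base: nn.Conv2d, r: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # Low-rank path: 1x1 down-projection, then a conv matching the base.
        self.down = nn.Conv2d(base.in_channels, r, kernel_size=1, bias=False)
        self.up = nn.Conv2d(r, base.out_channels, base.kernel_size,
                            stride=base.stride, padding=base.padding, bias=False)
        nn.init.zeros_(self.up.weight)  # start as a no-op update
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))
```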
Try it now! The model is deployed on Hugging Face Spaces for quick inference - no installation required.
```bash
git clone https://github.com/yuuIind/SAR2Optical.git
cd SAR2Optical
pip install -r requirements.txt
```

Download pre-trained checkpoints from the Results & Model Files section.
- Place checkpoint files in the `checkpoints/` directory
Configure paths in `config.yaml`:

```yaml
inference:
  image_path: "path/to/your/sar_image.png"
  output_path: "./output/result.jpg"
  gen_checkpoint: "checkpoints/pix2pix_gen_180.pth"
  device: "cuda"  # or "cpu"
```

Then run inference (a minimal sketch of the inference flow follows the project layout below):

```bash
python inference.py
```

To launch the Streamlit web app locally:

```bash
streamlit run app.py
```

Project structure:

```text
SAR2Optical/
├── src/                  # Model architecture
├── utils/                # Utilities and config
├── checkpoints/          # Model weights
├── samples/              # Sample SAR images
├── output/               # Inference outputs
├── inference.py          # Run inference
├── train.py              # Training script
├── app.py                # Streamlit web app
├── preprocess.py         # SAR preprocessing
├── finetune.ipynb        # Fine-tuning notebook (Colab)
├── config.yaml           # Configuration
└── requirements.txt      # Dependencies
```
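For orientation, here is a minimal sketch of the inference flow. The `UnetGenerator` class name, its import path, and the checkpoint format are assumptions; refer to `inference.py` for the real logic:

```python
# Hypothetical inference sketch; the repo's inference.py, model class name,
# and checkpoint layout may differ.
import torch
from PIL import Image
from torchvision import transforms

from src.model import UnetGenerator  # assumed import path

device = "cuda" if torch.cuda.is_available() else "cpu"
gen = UnetGenerator(c_in=3, c_out=3).to(device)
gen.load_state_dict(torch.load("checkpoints/pix2pix_gen_180.pth", map_location=device))
gen.eval()

# Pix2Pix conventionally normalizes inputs to [-1, 1].
to_tensor = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])

sar = to_tensor(Image.open("samples/sar_image.png").convert("RGB")).unsqueeze(0).to(device)
with torch.no_grad():
    fake_opt = gen(sar)

# Map [-1, 1] back to [0, 1] and save.
out = (fake_opt.squeeze(0).cpu() * 0.5 + 0.5).clamp(0, 1)
transforms.ToPILImage()(out).save("output/result.jpg")
```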
Open `finetune.ipynb` in Google Colab for free GPU access.
The notebook:
- Downloads the dataset and pre-trained checkpoint
- Configures training parameters
- Fine-tunes with data augmentation (paired transforms, as sketched after this list)
- Saves checkpoints and visualizes results
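Pix2Pix needs the same geometric transform applied to both halves of a pair. A sketch of paired augmentation along those lines (a hypothetical helper; the notebook's actual augmentation may differ):

```python
# Hypothetical paired augmentation for SAR/optical pairs; both images must
# receive identical geometry or the pixel-wise L1 loss breaks down.
import random
import torchvision.transforms.functional as TF

def paired_augment(sar, opt, jitter=30):
    """Resize, random-crop, and random-flip a (SAR, optical) pair identically."""
    size = sar.height  # assumes square 256x256 inputs
    # Resize slightly larger, then take the same random crop from both.
    sar = TF.resize(sar, [size + jitter, size + jitter])
    opt = TF.resize(opt, [size + jitter, size + jitter])
    top = random.randint(0, jitter)
    left = random.randint(0, jitter)
    sar = TF.crop(sar, top, left, size, size)
    opt = TF.crop(opt, top, left, size, size)
    if random.random() < 0.5:  # one flip decision shared by both images
        sar, opt = TF.hflip(sar), TF.hflip(opt)
    return sar, opt
```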
QXSLAB_SAROPT Dataset (Fine-tuning):
Dataset structure:
```text
QXSLAB_SAROPT/
├── sar_256_oc_0.2/    # SAR images (20,000)
└── opt_256_oc_0.2/    # Optical images (20,000)
```
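A layout like this can be served by a small PyTorch `Dataset`. The class below is an illustrative sketch, not the repo's loader, and assumes the two folders contain matching, identically sorted filenames:

```python
# Hypothetical paired-image Dataset for the QXSLAB_SAROPT layout above.
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset

class SarOptDataset(Dataset):
    def __init__(self, root, transform=None):
        root = Path(root)
        self.sar_paths = sorted((root / "sar_256_oc_0.2").iterdir())
        self.opt_paths = sorted((root / "opt_256_oc_0.2").iterdir())
        assert len(self.sar_paths) == len(self.opt_paths), "unpaired data"
        self.transform = transform

    def __len__(self):
        return len(self.sar_paths)

    def __getitem__(self, idx):
        sar = Image.open(self.sar_paths[idx]).convert("RGB")
        opt = Image.open(self.opt_paths[idx]).convert("RGB")
        if self.transform:  # e.g. the paired augmentation sketched earlier
            sar, opt = self.transform(sar, opt)
        return sar, opt
```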
Sentinel-1/2 Dataset (Pre-training):
If your SAR images are from a different source, preprocessing may improve results.
```bash
# Single image
python preprocess.py --input /path/to/sar.png --output /path/to/output.png

# Batch processing
python preprocess.py --input /path/to/sar_folder/ --output /path/to/output_folder/
```
| Parameter | Description | Default |
|---|---|---|
| `--filter` | Speckle filter type: `lee`, `frost`, `median`, `gaussian`, `bilateral` | `lee` |
| `--window-size` | Filter window size (odd number) | 5 |
| `--percentile-low` | Lower percentile for intensity clipping | 2.0 |
| `--percentile-high` | Upper percentile for intensity clipping | 98.0 |
| `--gamma` | Gamma correction (< 1 brightens) | 1.0 |
| `--grayscale` | Convert grayscale to RGB | False |
See PREPROCESSING_GUIDE.md for detailed documentation.
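For intuition about the default `lee` filter, here is a simplified Lee speckle filter built on `scipy` (already a dependency). It is a basic sketch; `preprocess.py` may implement it with additional refinements:

```python
# Simplified Lee speckle filter sketch; the repo's version may differ.
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img: np.ndarray, window_size: int = 5) -> np.ndarray:
    """Adaptive Lee filter: smooth homogeneous regions, preserve edges."""
    img = img.astype(np.float64)
    local_mean = uniform_filter(img, size=window_size)
    local_sq_mean = uniform_filter(img ** 2, size=window_size)
    local_var = local_sq_mean - local_mean ** 2
    # Noise variance estimated as the mean of the local variances.
    noise_var = local_var.mean()
    # Weight ~1 near edges (high variance), ~0 in flat areas.
    weight = local_var / (local_var + noise_var + 1e-12)
    return local_mean + weight * (img - local_mean)
```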
Generator (U-Net):
- 8-layer encoder-decoder with skip connections (a scaled-down sketch follows this list)
- Input: 256x256x3 SAR image
- Output: 256x256x3 optical image
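As a toy illustration of that encoder-decoder-with-skips pattern (3 levels here instead of the repo's 8; details of the real generator in `src/` may differ):

```python
# Scaled-down U-Net sketch in the spirit of Pix2Pix; illustrative only.
import torch
import torch.nn as nn

def down(c_in, c_out, norm=True):
    layers = [nn.Conv2d(c_in, c_out, 4, stride=2, padding=1, bias=not norm)]
    if norm:
        layers.append(nn.BatchNorm2d(c_out))
    layers.append(nn.LeakyReLU(0.2))
    return nn.Sequential(*layers)

def up(c_in, c_out):
    return nn.Sequential(
        nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1, bias=False),
        nn.BatchNorm2d(c_out),
        nn.ReLU(),
    )

class TinyUnet(nn.Module):
    """3-level toy U-Net; the real generator halves 256 -> ... -> 1 and back."""
    def __init__(self, c_in=3, c_out=3):
        super().__init__()
        self.d1, self.d2, self.d3 = down(c_in, 64, norm=False), down(64, 128), down(128, 256)
        self.u1 = up(256, 128)
        self.u2 = up(128 + 128, 64)   # skip connection doubles channels
        self.u3 = nn.Sequential(
            nn.ConvTranspose2d(64 + 64, c_out, 4, stride=2, padding=1),
            nn.Tanh(),  # output in [-1, 1]
        )

    def forward(self, x):
        e1, e2 = self.d1(x), None
        e2 = self.d2(e1)
        e3 = self.d3(e2)
        y = self.u1(e3)
        y = self.u2(torch.cat([y, e2], dim=1))
        return self.u3(torch.cat([y, e1], dim=1))
```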
Discriminator (PatchGAN):
- 70x70 receptive field
- Classifies image patches as real/fake
Loss Function:
- Adversarial loss (BCE) + L1 reconstruction loss (λ=100)
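In code, the combined objective looks roughly like this (illustrative names; `train.py` may organize it differently). Note the conditional setup: the discriminator sees the SAR input concatenated with the optical image:

```python
# Sketch of the Pix2Pix losses; variable names are illustrative.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()  # adversarial loss on PatchGAN logits
l1 = nn.L1Loss()
lambda_l1 = 100.0

def g_loss(netD, sar, real_opt, fake_opt):
    # Generator: fool the discriminator AND stay close to ground truth.
    pred_fake = netD(torch.cat([sar, fake_opt], dim=1))
    adv = bce(pred_fake, torch.ones_like(pred_fake))
    return adv + lambda_l1 * l1(fake_opt, real_opt)

def d_loss(netD, sar, real_opt, fake_opt):
    # Discriminator: real pairs -> 1, generated pairs -> 0 (per patch).
    pred_real = netD(torch.cat([sar, real_opt], dim=1))
    pred_fake = netD(torch.cat([sar, fake_opt.detach()], dim=1))
    return 0.5 * (bce(pred_real, torch.ones_like(pred_real))
                  + bce(pred_fake, torch.zeros_like(pred_fake)))
```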
Key parameters in config.yaml:
```yaml
training:
  num_epochs: 200
  batch_size: 32
  lr: 0.0002
  lambda_L1: 100.0

model:
  c_in: 3
  c_out: 3
  netD: "patch"
  n_layers: 3
```
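Assuming PyYAML is available, these values can be read like so (a sketch; the project's scripts may load configuration differently):

```python
# Sketch: reading config.yaml with PyYAML.
import yaml

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

num_epochs = cfg["training"]["num_epochs"]   # 200
lambda_l1 = cfg["training"]["lambda_L1"]     # 100.0
```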
Example results pair each SAR input with its generated optical image and the ground-truth optical image.
- Python 3.8+
- PyTorch 2.0+
- CUDA (optional, for GPU acceleration)
Key dependencies:
```text
torch>=2.0.0
torchvision>=0.15.0
Pillow>=9.0.0
numpy>=1.21.0
scipy>=1.7.0
streamlit>=1.28.0
```
| File | Description |
|---|---|
| `inference.py` | Single image inference |
| `train.py` | Full training from scratch |
| `finetune.ipynb` | Fine-tuning notebook for Colab |
| `app.py` | Streamlit web interface |
| `preprocess.py` | SAR image preprocessing |
| `PREPROCESSING_GUIDE.md` | Detailed preprocessing documentation |
| `config.yaml` | All configuration options |
MIT License - see LICENSE for details.
- Pix2Pix Paper - Isola et al.
- QXS-SAROPT Dataset - Xu et al.
- Sentinel-1/2 Image Pairs Dataset
If you use this code, please cite:
```bibtex
@misc{sar2optical2024,
  author = {yuuIind},
  title = {SAR2Optical: SAR to Optical Image Translation},
  year = {2024},
  publisher = {GitHub},
  url = {https://github.com/yuuIind/SAR2Optical}
}
```

If you use the QXS-SAROPT dataset, please also cite:
```bibtex
@article{xu2021qxs,
  title={QXS-SAROPT: A Benchmark Dataset for Multi-modal SAR-Optical Image Matching},
  author={Xu, Yao and others},
  journal={arXiv preprint arXiv:2103.08259},
  year={2021}
}
```

