
GenAI Meets SAR: A List of Resources

Papers and code resources for our survey paper: Generative AI Meets SAR.

Awesome Papers

Review & Survey Papers

GenAI Technology

A Survey on Generative Modeling with Limited Data, Few Shots, and Zero Shot

Controllable Data Generation by Deep Learning: A Review

Diffusion Models: A Comprehensive Survey of Methods and Applications

Making Images Real Again: A Comprehensive Survey on Deep Image Composition

Synthetic Aperture Radar

Application of deep generative networks for SAR/ISAR: a review

A review of Generative Adversarial Networks (GANs) and its applications in a wide variety of disciplines - From Medical to Remote Sensing

Deep Learning Methods For Synthetic Aperture Radar Image Despeckling: An Overview Of Trends And Perspectives

Explainable, Physics-Aware, Trustworthy Artificial Intelligence: A paradigm shift for synthetic aperture radar

Microwave Vision and Intelligent Perception of Radar Imagery

A review and meta-analysis of Generative Adversarial Networks and their applications in remote sensing

Language-Guided Diffusion Models for Remote Sensing

Diffusion Models Meet Remote Sensing: Principles, Methods, and Perspectives

Generative Artificial Intelligence Meets Synthetic Aperture Radar: A survey

Electromagnetic Modeling

SARCASTIC v2.0—High-Performance SAR Simulation for Next-Generation ATR Systems

Ray-Tracing Simulation Techniques for Understanding High-Resolution SAR Images

Potentials and limitations of SAR image simulators – A comparative study of three simulation approaches

Imaging Simulation of Polarimetric SAR for a Comprehensive Terrain Scene Using the Mapping and Projection Algorithm

RaySAR - 3D SAR simulator: Now open source

Statistical Modeling

Numerical Simulation of SAR Image for Sea Surface

Synthetic Aperture Radar Image Statistical Modeling: Part One-Single-Pixel Statistical Models

Synthetic Aperture Radar Image Statistical Modeling: Part Two-Spatial Correlation Models and Simulation

A Facet-Based Numerical Model for Simulating SAR Altimeter Echoes From Heterogeneous Sea Ice Surfaces

Statistical Modeling of Polarimetric SAR Data: A Survey and Challenges

A Physical Analysis of Polarimetric SAR Data Statistical Models

Physics-Inspired GenAI Methods

NeRF + Radar:

Radar Fields: An Extension of Radiance Fields to SAR

DART: Implicit Doppler Tomography for Radar Novel View Synthesis

Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar

ISAR-NeRF: Neural Radiance Fields for 3-D Imaging of Space Target From Multiview ISAR Images

Circular SAR Incoherent 3D Imaging with a NeRF-Inspired Method

RaNeRF: Neural 3-D Reconstruction of Space Targets From ISAR Image Sequences

Physics Meets GenAI in Computer Vision:

PAC-NeRF: Physics Augmented Continuum Neural Radiance Fields for Geometry-Agnostic System Identification

Physics-Informed Guided Disentanglement in Generative Networks

Model-Based Deep Learning

PhyRecon: Physically Plausible Neural Scene Reconstruction

Physically-aware Generative Network for 3D Shape Modeling

AI-Empowered Physical Model

Dynamic ocean inverse modeling based on differentiable rendering

Differentiable Rendering for Synthetic Aperture Radar Imagery

Learning Surface Scattering Parameters From SAR Images Using Differentiable Ray Tracing

Reinforcement Learning for SAR View Angle Inversion with Differentiable SAR Renderer

Extension of Differentiable SAR Renderer for Ground Target Reconstruction From Multiview Images and Shadows

Differentiable SAR Renderer and Image-Based Target Reconstruction

Model-Based Information Extraction From SAR Images Using Deep Learning

A SAR Target Image Simulation Method With DNN Embedded to Calculate Electromagnetic Reflection

Parameter Extraction Based on Deep Neural Network for SAR Target Simulation

🔥 Remote Sensing Image Generation with Diffusion Models

Diffusion models have demonstrated significant potential in remote sensing image generation tasks, including optical and SAR imagery. Existing research methods can be broadly categorized into two approaches: the first involves fine-tuning pre-trained models, where existing diffusion models are adapted to the remote sensing domain through transfer learning with domain-specific data; the second approach relies on end-to-end training with image-text paired data, in which diffusion models are trained from scratch without leveraging general-purpose models, aiming to learn cross-modal generation capabilities directly from remote sensing imagery and corresponding textual descriptions.

1. Cross-Modal Remote Sensing Image Generation (Supporting SAR Image Synthesis)

This category of methods focuses on leveraging pretrained diffusion models (such as Stable Diffusion) as the foundation, adapting them to remote sensing image generation tasks through fine-tuning. The generation targets include optical image synthesis and cross-modal generation from optical to SAR imagery. Compared to models trained from scratch, these approaches use efficient fine-tuning techniques (such as LoRA or ControlNet) to adapt quickly to remote sensing data, offering greater generalizability and computational efficiency (a minimal fine-tuning sketch follows the list below):

  • Some methods employ LoRA for fine-tuning on text-image paired datasets, enabling text-controlled optical image generation;
  • Others incorporate ControlNet, using conditional inputs such as optical images, edge maps, or semantic segmentation maps to achieve cross-modal generation (e.g., SAR images) or structured optical image synthesis;
  • Certain methods further fine-tune adapters on task-specific datasets to enhance generation accuracy for particular applications.
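As a minimal sketch of the LoRA route in the first bullet, the snippet below attaches low-rank adapters to the UNet of a pretrained Stable Diffusion pipeline with the Hugging Face diffusers and peft libraries, and runs one denoising-loss step on a batch of remote sensing image-caption pairs. The base checkpoint, LoRA rank, and hyperparameters are illustrative assumptions, not the settings of any listed method.

import torch
import torch.nn.functional as F
from diffusers import StableDiffusionPipeline
from peft import LoraConfig, get_peft_model

device = "cuda"
# Illustrative base checkpoint; other text-to-image diffusion models work similarly.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)

# Attach low-rank adapters to the UNet attention projections;
# the VAE and text encoder remain frozen.
lora_cfg = LoraConfig(r=8, lora_alpha=16,
                      target_modules=["to_q", "to_k", "to_v", "to_out.0"])
pipe.unet = get_peft_model(pipe.unet, lora_cfg)
trainable = [p for p in pipe.unet.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)

def training_step(pixel_values, input_ids):
    # pixel_values: (B, 3, H, W) images scaled to [-1, 1], on `device`;
    # input_ids: captions tokenized with pipe.tokenizer.
    with torch.no_grad():
        latents = pipe.vae.encode(pixel_values).latent_dist.sample()
        latents = latents * pipe.vae.config.scaling_factor
        text_emb = pipe.text_encoder(input_ids)[0]
    noise = torch.randn_like(latents)
    t = torch.randint(0, pipe.scheduler.config.num_train_timesteps,
                      (latents.shape[0],), device=device)
    noisy = pipe.scheduler.add_noise(latents, noise, t)
    pred = pipe.unet(noisy, t, encoder_hidden_states=text_emb).sample
    loss = F.mse_loss(pred, noise)  # standard epsilon-prediction objective
    loss.backward(); optimizer.step(); optimizer.zero_grad()
    return loss.item()

ControlNet-based variants follow the same recipe, but additionally pass a conditioning image (e.g. an optical image, edge map, or segmentation map) through a trainable control branch while the base UNet stays frozen.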

MMM-RS: A Multi-modal, Multi-GSD, Multi-scene Remote Sensing Dataset and Benchmark for Text-to-Image Generation

Diffusion-Geo: A Two-Stage Controllable Text-To-Image Generative Model for Remote Sensing Scenarios

CRS-Diff: Controllable Remote Sensing Image Generation With Diffusion Model

DiffusionSat: A Generative Foundation Model for Satellite Imagery

2. Diffusion Methods Driven by Image-Text Paired Data

This category of methods uses large-scale image-text paired datasets to drive the training of diffusion models directly, primarily enabling text-controlled optical image generation, with some approaches further supporting cross-modal generation from optical to SAR images. Compared to fine-tuning pretrained models, these methods emphasize data-driven model construction and deep fusion of textual and visual content (a conditioning sketch follows the list below):

  • Utilizing large-scale image-text paired datasets to train diffusion models from scratch, effectively integrating textual information with metadata or temporal embeddings to achieve highly controllable optical image generation;
  • A few methods incorporate techniques such as ControlNet to enable cross-modal generation based on text control, extending to SAR image synthesis.
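As an illustration of how textual information can be fused with metadata embeddings, the module below prepends a learned metadata token to the caption token sequence before it conditions the denoising UNet. The dimensions and the choice of metadata fields are assumptions for this sketch, not taken from any listed model.

import torch
import torch.nn as nn

class MetadataConditioner(nn.Module):
    # Fuses caption token embeddings with numeric scene metadata into one
    # conditioning sequence for the denoising UNet's cross-attention.
    def __init__(self, text_dim=768, n_meta=4):
        super().__init__()
        self.meta_mlp = nn.Sequential(
            nn.Linear(n_meta, text_dim), nn.SiLU(),
            nn.Linear(text_dim, text_dim),
        )

    def forward(self, text_emb, metadata):
        # text_emb: (B, L, D) caption tokens; metadata: (B, n_meta) scalars,
        # e.g. GSD, latitude, longitude, normalized acquisition time.
        meta_token = self.meta_mlp(metadata).unsqueeze(1)  # (B, 1, D)
        return torch.cat([meta_token, text_emb], dim=1)    # (B, L + 1, D)

Treating metadata as an extra token lets the UNet attend to it with the same cross-attention mechanism it already uses for text, avoiding architectural changes.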

Text2Earth: Unlocking Text-driven Remote Sensing Image Generation with a Global-Scale Dataset and a Foundation Model

MetaEarth: A Generative Foundation Model for Global-Scale Remote Sensing Image Generation

Datasets

Multi-view SAR Target Generation

The Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset

The Synthetic and Measured Paired and Labeled Experiment (SAMPLE) dataset

Multi-Angle SAR Dataset of Aircraft Targets (飞机目标多角度SAR数据集)

OpenSARShip dataset

FUSARShip dataset

SAR-to-Optical Image Translation

SEN1-2: The SEN1-2 Dataset for Deep Learning in SAR-Optical Data Fusion

SAR2Opt: A Comparative Analysis of GAN-Based Methods for SAR-to-Optical Image Translation

QXS-SAROPT: The QXS-SAROPT Dataset for Deep Learning in SAR-Optical Data Fusion

SEN12MS: SEN12MS – A Curated Dataset of Georeferenced Multi-Spectral Sentinel-1/2 Imagery for Deep Learning and Data Fusion

WHU-SEN-City: SAR-to-Optical Image Translation Using Supervised Cycle-Consistent Adversarial Networks

Multi-Sensor All Weather Mapping (MSAW) Dataset: SpaceNet 6: Multi-Sensor All Weather Mapping Dataset

Experiments

We provide several GAN-based baseline models for multi-view SAR target image generation under limited observation angles. The source code can be found in ./GAN.

Method

The baseline models are based on ACGAN, using the class label $y$ and azimuth angle $\theta$ as conditional inputs. The discriminator not only distinguishes real input images from generated ones, but also predicts the class label and azimuth angle of its input. Furthermore, to stabilize the training process, we adopt several additional training techniques. A sketch of the multi-head discriminator is given below.
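The following is a minimal sketch of this design; the layer sizes and the $(\sin\theta, \cos\theta)$ angle encoding are illustrative assumptions, not the repository's exact architecture. One shared convolutional backbone feeds three heads: adversarial (real vs. fake), auxiliary classification (class label $y$), and azimuth regression.

import torch
import torch.nn as nn

class ACGANDiscriminator(nn.Module):
    # Shared backbone with three heads: real/fake logit, class label logits,
    # and azimuth angle regressed as a (sin, cos) unit vector.
    def __init__(self, n_classes=10, in_channels=1):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.adv_head = nn.Linear(256, 1)          # real vs. fake
        self.cls_head = nn.Linear(256, n_classes)  # class label y
        self.ang_head = nn.Linear(256, 2)          # (sin θ, cos θ)

    def forward(self, x):
        h = self.backbone(x)
        return self.adv_head(h), self.cls_head(h), self.ang_head(h)

Encoding the azimuth as a unit vector keeps the regression target continuous across the 0°/360° wrap-around; a plain scalar target would heavily penalize a 359° prediction against a 1° ground truth.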

Getting started

Datasets

The MSTAR dataset is used in the experiments. It contains ten classes of ground vehicles, with azimuth angles ranging from 0° to 360°.

Training

To train a GAN model, run the following command:

python train.py \
   --bs 32 \
   --lrg 0.0001 \
   --lrd 0.0001 \
   --num_epochs 500 \
   --save_dir ${SAVE_PATH}

  • bs is the batch size; lrg and lrd are the learning rates of the generator and discriminator, respectively

Generating

After the training stage, run the following command to generate SAR target images with the given label and angle information, corresponding to a 15° depression angle.

python generate.py
