TERRA-GAN is a deep learning framework that automatically generates bare-earth models from Digital Surface Models (DSMs) by removing man-made structures and occlusions. The system uses Generative Adversarial Networks (GANs) with partial convolutions to create realistic terrain in masked regions. (Note: the implemented version uses classical image processing to generate masks; in practice you would use a segmentation model such as Segment Anything.)
- Boundary-Aware Loss: Improves transitions between original and AI-generated terrain
- Human-Guided Fine-Tuning: Web-based annotation portal allows human feedback to refine results
- Spatially-Aware Data Handling: Prevents data leakage between adjacent terrain tiles
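The boundary-aware loss idea can be illustrated with a short PyTorch sketch. This is not the repository's implementation: the band width, the L1 base term, and the extra-weighting scheme are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def boundary_aware_l1(pred, target, mask, band_width=3, boundary_weight=0.5):
    """Illustrative sketch: L1 loss with extra weight on a thin band around
    the mask boundary, where AI-generated and original terrain meet.

    pred, target: (N, 1, H, W) DSM tensors
    mask:         (N, 1, H, W) binary mask (1 = region to inpaint)
    """
    k = 2 * band_width + 1
    # Morphological dilation/erosion of the mask via max-pooling
    dilated = F.max_pool2d(mask, kernel_size=k, stride=1, padding=band_width)
    eroded = -F.max_pool2d(-mask, kernel_size=k, stride=1, padding=band_width)
    boundary = dilated - eroded  # 1 inside the transition band, 0 elsewhere

    per_pixel = torch.abs(pred - target)
    base = per_pixel.mean()
    extra = (per_pixel * boundary).sum() / boundary.sum().clamp(min=1.0)
    return base + boundary_weight * extra
```

The extra term penalises errors only in the transition band, which is one plausible way to sharpen the seams the Boundary-Aware Loss feature targets.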
- Python 3.10+
- PyTorch 2.0+ (tested with 2.5.1)
- MLflow 2.8.0+
- CUDA Toolkit (for GPU acceleration)
- Additional packages in `requirements.txt`
- NVIDIA GPU with 16GB+ VRAM (developed on RTX 4070Ti)
- 32GB+ RAM (64GB recommended)
- 100GB+ storage space
```bash
# Clone repository
git clone https://github.com/FKGSOFTWARE/TERRA-GAN.git
cd TERRA-GAN

# Create and activate virtual environment
python -m venv venv
source venv/bin/activate   # Linux/macOS
# venv\Scripts\activate    # Windows

# Install dependencies
pip install -r requirements.txt
```

The data for this project was sourced from Ordnance Survey (Aerial Digimap), and the system is designed to use data downloaded through their portal. Select "Download data", enter the grid reference (e.g. "NJ05"), choose the data to download ("Aerial Imagery": "High Resolution 25cm"; "Height and Terrain": "Digital Surface Model (2m)"), then download the zip folders and place them in the input directory.
- Place raw data zip files in the designated directories:
  - Training data: `./data/raw_data/experiment_training_input/`
  - Evaluation data: `./data/raw_data/experiment_human_eval_input/`
- Ensure the baseline model is in `./_BASELINE_MODEL/BASELINE_MODEL.pth` (if using pre-trained weights)
```bash
./start_mlflow.sh [PORT]   # Default port is 5000
./run_experiment.sh
```

You will be prompted for an experiment name (e.g., `EXPERIMENT_00_BASELINE`) and to confirm human annotation steps.
```bash
./ablation_experiment.sh
```

To run the boundary-aware loss ablation:
- Set `training.loss_weights.boundary` to 0.5 in `config.yaml` (the weight must be > 0 for the loss to be included).
- Run `./run_experiment.sh`
- Name the experiment `EXPERIMENT_04_ADDITION_BOUNDARY-AWARE-LOSS`
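The "included only when its weight is > 0" behaviour can be sketched as follows. This is illustrative only; the actual loss terms and how the codebase combines them are not shown here.

```python
def combine_losses(losses, weights):
    """Illustrative sketch: combine named loss terms, skipping any term
    whose configured weight is 0 (so the boundary loss only contributes
    when training.loss_weights.boundary > 0)."""
    return sum(weights[name] * value
               for name, value in losses.items()
               if weights.get(name, 0) > 0)
```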
Key settings in `config.yaml`:
- `training`: Epochs, batch sizes, learning rates, loss weights
- `evaluation`: Checkpoint paths, metric thresholds
- `mask_processing`: Parameters for feature detectors
- `data`: Paths for raw, processed, and output data
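The sections above might look like the following in `config.yaml`. This is an illustrative fragment: only the section names, the boundary weight, and the baseline checkpoint path come from this document; the individual keys and values are assumptions.

```yaml
training:
  epochs: 100            # assumption
  batch_size: 8          # assumption
  learning_rate: 0.0002  # assumption
  loss_weights:
    boundary: 0.5        # > 0 enables the boundary-aware loss

evaluation:
  checkpoint_path: ./_BASELINE_MODEL/BASELINE_MODEL.pth

mask_processing:
  # parameters for the feature detectors

data:
  raw: ./data/raw_data
```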
```bash
# Calculate terrain metrics
python evaluate_terrain.py \
  --original-masks path/to/original_masks \
  --final-annotations path/to/human_annotations \
  --output-file path/to/metrics.json

# Compare experiments statistically
python result_metrics_statistical_significance.py \
  --experiments path/to/exp1_metrics.json path/to/exp2_metrics.json \
  --output path/to/stats_comparison.json
```

- `main_pipeline.py`: Orchestrates the pipeline modes
- `mvp_gan/`: Core GAN implementation
  - `src/models/`: Generator and Discriminator models
  - `src/training/`: Training loops
  - `src/evaluation/`: Metrics and evaluation
- `utils/`: Helper modules
  - `api/`: Portal client for human feedback
  - `data_splitting.py`: Spatially-aware data splitting
  - `experiment_tracking.py`: MLflow integration
  - `visualization/`: DSM colorization
- Evaluation scripts: `evaluate_terrain.py`, `plot_*.py`
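The spatially-aware splitting behind `data_splitting.py` can be sketched as grouping tiles by their parent grid square, so neighbouring tiles never straddle the train/validation boundary. This is an illustrative sketch, not the repository's implementation; the tile-id format (`<grid>_<row>_<col>`) is an assumption.

```python
import hashlib

def spatial_split(tile_ids, val_fraction=0.2):
    """Illustrative sketch of spatially-aware splitting: every tile from
    the same grid square (assumed tile-id prefix, e.g. "NJ05_r3_c7" ->
    "NJ05") lands in the same split, preventing leakage between
    adjacent terrain tiles."""
    train, val = [], []
    for tile_id in tile_ids:
        group = tile_id.split("_")[0]
        # Deterministic hash so the split is stable across runs
        bucket = int(hashlib.md5(group.encode()).hexdigest(), 16) % 100
        (val if bucket < val_fraction * 100 else train).append(tile_id)
    return train, val
```

Assigning whole grid squares (rather than individual tiles) to a split is what prevents adjacent, highly correlated tiles from appearing in both training and validation.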