zak-510/disaster-classifier

Model Performance Metrics

Localization Model

  • Architecture: U-Net with encoder-decoder structure for building segmentation
  • Performance: Limited by hardware constraints during training
  • Model File: weights/best_localization.pth
  • Input/Output: 1024x1024 satellite images → building segmentation masks
  • Status: Suboptimal performance due to training limitations (see Limitations section)
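
A minimal sketch of running the localization model on one tile, assuming the U-Net is exposed as a class named UNet in models/model.py and that best_localization.pth stores a state_dict (both are assumptions; the actual names live in models/model.py and inference/localization_inference.py):

import cv2
import torch

from models.model import UNet  # assumed class name; check models/model.py

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = UNet()  # constructor arguments may differ from the real model
model.load_state_dict(torch.load('weights/best_localization.pth', map_location=device))
model.to(device).eval()

# Load a satellite tile (path is illustrative) and scale to the expected 1024x1024 input.
# Channel order and normalization must match training; this is only a sketch.
image = cv2.resize(cv2.imread('Data/test/images/example_pre_disaster.png'), (1024, 1024))
tensor = torch.from_numpy(image).permute(2, 0, 1).float().unsqueeze(0) / 255.0

with torch.no_grad():
    logits = model(tensor.to(device))  # assuming a single-channel logit map
mask = (torch.sigmoid(logits)[0, 0] > 0.5).cpu().numpy()  # binary building mask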

Damage Classification Model

  • Architecture: CNN classifier for building damage assessment
  • Overall F1 Score (Weighted): 84.4% (validation set)
  • Test Set F1 Score (Weighted): 82.7%
  • Model File: weights/best_damage.pth
  • Input/Output: 64x64 building patches → damage class prediction
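
A minimal sketch of classifying a single 64x64 building patch, assuming a DamageClassifier class in models/damage_model.py and four output classes ordered no-damage, minor-damage, major-damage, destroyed (all assumptions; the real class name and label order are in models/damage_model.py and inference/damage_inference.py):

import numpy as np
import torch

from models.damage_model import DamageClassifier  # assumed class name

CLASSES = ['no-damage', 'minor-damage', 'major-damage', 'destroyed']  # assumed order

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = DamageClassifier()  # constructor arguments may differ
model.load_state_dict(torch.load('weights/best_damage.pth', map_location=device))
model.to(device).eval()

# patch: a 64x64 RGB crop centred on one detected building (HWC, uint8).
patch = np.zeros((64, 64, 3), dtype=np.uint8)  # placeholder; use a real crop
tensor = torch.from_numpy(patch).permute(2, 0, 1).float().unsqueeze(0) / 255.0

with torch.no_grad():
    probs = torch.softmax(model(tensor.to(device)), dim=1)[0]
print(CLASSES[int(probs.argmax())])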

Test Set Performance (F1 Score):

  • No-damage: 92% (precision: 88%, recall: 96%)
  • Minor-damage: 44% (precision: 61%, recall: 35%)
  • Major-damage: 43% (precision: 55%, recall: 35%)
  • Destroyed: 72% (precision: 76%, recall: 68%)
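
The per-class precision, recall, and F1 figures above are the kind of summary produced by scikit-learn's classification_report; a small illustration with dummy labels (the real evaluation is in evaluate_damage_classifier.py):

from sklearn.metrics import classification_report

CLASSES = ['no-damage', 'minor-damage', 'major-damage', 'destroyed']

# Dummy ground-truth and predicted class indices, purely for illustration.
y_true = [0, 0, 1, 2, 3, 3, 0, 1]
y_pred = [0, 0, 1, 1, 3, 2, 0, 0]

print(classification_report(y_true, y_pred, target_names=CLASSES))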

Architecture

The pipeline has two stages:

  1. Building Localization: Identifies building locations in satellite imagery
  2. Damage Classification: Classifies damage level for each detected building
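
Conceptually, the hand-off between the two stages is: threshold the localization mask, take a fixed-size crop around each connected building region, and classify each crop. A simplified sketch of that hand-off using scikit-image (the real pipeline is in inference/damage_inference.py; the patch-extraction details here are assumptions):

import numpy as np
from skimage.measure import label, regionprops

def extract_patches(image, mask, size=64):
    # image: HxWx3 satellite tile; mask: HxW binary building mask from stage 1.
    patches = []
    for region in regionprops(label(mask.astype(np.uint8))):
        cy, cx = map(int, region.centroid)
        half = size // 2
        y0 = int(np.clip(cy - half, 0, image.shape[0] - size))
        x0 = int(np.clip(cx - half, 0, image.shape[1] - size))
        patches.append(image[y0:y0 + size, x0:x0 + size])
    return patches  # each patch is fed to the stage-2 damage classifier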

Limitations

Localization Model Performance

The main limitation of this pipeline is the localization model's performance. Due to hardware constraints (insufficient GPU memory and training time), the model was not trained to optimal performance. As a result:

  • Missed buildings result in missed damage assessments
  • False-positive detections add noise to the damage predictions

Impact: The damage classifier performs strongly when given accurate building regions, but localization remains the bottleneck for end-to-end results.

Software Dependencies

Install required packages:

pip install -r requirements.txt

Core Dependencies:

  • Python 3.9+
  • PyTorch with CUDA support
  • OpenCV (cv2)
  • NumPy
  • Matplotlib
  • Shapely
  • scikit-image
  • scikit-learn
  • tqdm
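
To confirm the environment is usable, and in particular that PyTorch can see the GPU, a quick check:

import cv2
import numpy as np
import torch

print('PyTorch:', torch.__version__)
print('OpenCV:', cv2.__version__)
print('NumPy:', np.__version__)
print('CUDA available:', torch.cuda.is_available())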

Setup

1. Clone the Repository

git clone https://github.com/zak-510/disaster-classifier.git
cd disaster-classifier

2. Download the Dataset

This project uses the xBD dataset. You will need to download it to train models or run inference.

  1. Go to the official xView2 website: https://xview2.org/
  2. Download the dataset
  3. Create a Data directory in the root of the project
  4. Extract the downloaded dataset so that it matches the following structure:
Data/
├── train/
│   ├── images/
│   └── labels/
└── test/
    ├── images/
    └── labels/
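
A quick way to confirm the layout before training or running inference (the expected directories are exactly those shown above):

from pathlib import Path

for split in ('train', 'test'):
    for sub in ('images', 'labels'):
        d = Path('Data') / split / sub
        count = len(list(d.glob('*'))) if d.is_dir() else 'missing'
        print(f'{d}: {count}')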

3. Model Files

Pre-trained models are included in the repository in the weights/ directory:

  • weights/best_localization.pth - Building localization model
  • weights/best_damage.pth - Damage classification model
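
To verify that the checkpoints load correctly (assuming, as is common, that the .pth files store state_dicts; if they store full model objects the printout will differ):

import torch

for name in ('weights/best_localization.pth', 'weights/best_damage.pth'):
    checkpoint = torch.load(name, map_location='cpu')
    if isinstance(checkpoint, dict):
        print(name, '-', len(checkpoint), 'entries')
    else:
        print(name, '-', type(checkpoint).__name__)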

Quick Start Guide

1. Run Individual Inference

Building Localization

python inference/localization_inference.py

Output:

  • Processes 10 test images
  • Generates three-panel visualizations (original, ground truth, predictions)
  • Saves results to test_results/localization/

Damage Classification

python inference/damage_inference.py

Output:

  • Processes 10 test images with end-to-end pipeline
  • Combines localization + damage classification
  • Generates colored damage visualizations
  • Calculates comprehensive F1 metrics
  • Saves results to test_results/damage/

2. Model Evaluation

Damage Classifier Evaluation

python evaluate_damage_classifier.py

Output:

  • Evaluates on the full test set (53,850 building patches)
  • Provides detailed performance metrics
  • Generates confusion matrix and classification report
  • Saves detailed results to test_results/damage_classifier_evaluation.csv
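
The per-building results in test_results/damage_classifier_evaluation.csv can be inspected with the standard library; the snippet below only assumes the file has a header row (no specific column names are assumed):

import csv

with open('test_results/damage_classifier_evaluation.csv', newline='') as f:
    reader = csv.reader(f)
    header = next(reader)
    rows = list(reader)

print('columns:', header)
print('rows:', len(rows))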

Directory Structure

xbd-pipeline/
├── README.md              # This file
├── requirements.txt       # Python dependencies
├── evaluate_damage_classifier.py  # Model evaluation script
├── Data/                  # xBD dataset (not in repository)
│   ├── train/
│   └── test/
├── weights/               # Pre-trained model files
│   ├── best_localization.pth
│   └── best_damage.pth
├── test_results/          # Generated inference outputs
│   ├── localization/
│   └── damage/
├── data_processing/       # Data preprocessing scripts
│   ├── localization_data.py
│   └── damage_data.py
├── models/                # Model architecture definitions
│   ├── __init__.py
│   ├── model.py
│   └── damage_model.py
├── training/              # Model training scripts
│   ├── train_localization.py
│   └── train_damage.py
├── inference/             # Inference pipeline scripts
│   ├── localization_inference.py
│   └── damage_inference.py
└── tests/                 # Test utilities (also used as modules)
    ├── test_localization_inference.py
    └── test_damage_inference.py
