- Architecture: U-Net with encoder-decoder structure for building segmentation
- Performance: Limited by hardware constraints during training
- Model File: `weights/best_localization.pth`
- Input/Output: 1024x1024 satellite images → building segmentation masks
- Status: Suboptimal performance due to training limitations (see Limitations section)
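For reference, a minimal sketch of running the localization model directly. The `UNet` class name and constructor signature are assumptions; check `models/model.py` for the actual definition:

```python
import torch

from models.model import UNet  # assumed class name; see models/model.py

# Load the pre-trained localization weights onto the available device.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = UNet().to(device)
model.load_state_dict(torch.load('weights/best_localization.pth', map_location=device))
model.eval()

# Dummy 1024x1024 RGB image in NCHW float format; replace with a real preprocessed tile.
image = torch.rand(1, 3, 1024, 1024, device=device)

with torch.no_grad():
    logits = model(image)
    # Threshold the sigmoid output to obtain a binary building mask.
    mask = (torch.sigmoid(logits) > 0.5).squeeze().cpu().numpy()

print(mask.shape)  # expected: (1024, 1024)
```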
- Architecture: CNN classifier for building damage assessment
- Overall F1 Score (Weighted): 84.4% (validation set)
- Test Set F1 Score (Weighted): 82.7%
- Model File: `weights/best_damage.pth`
- Input/Output: 64x64 building patches → damage class prediction

Per-class F1 scores:
- No-damage: 92% (precision: 88%, recall: 96%)
- Minor-damage: 44% (precision: 61%, recall: 35%)
- Major-damage: 43% (precision: 55%, recall: 35%)
- Destroyed: 72% (precision: 76%, recall: 68%)
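Analogously, a sketch of classifying a single building patch. The `DamageClassifier` class name and the class ordering are assumptions; check `models/damage_model.py` and the training label encoding:

```python
import torch

from models.damage_model import DamageClassifier  # assumed class name; see models/damage_model.py

# Assumed class ordering; verify against the training label encoding.
CLASSES = ['no-damage', 'minor-damage', 'major-damage', 'destroyed']

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = DamageClassifier().to(device)
model.load_state_dict(torch.load('weights/best_damage.pth', map_location=device))
model.eval()

# Dummy 64x64 RGB building patch in NCHW float format.
patch = torch.rand(1, 3, 64, 64, device=device)

with torch.no_grad():
    probs = torch.softmax(model(patch), dim=1).squeeze()

print(CLASSES[int(probs.argmax())], probs.tolist())
```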
The pipeline runs in two stages, sketched in code below the list:
- Building Localization: Identifies building locations in satellite imagery
- Damage Classification: Classifies damage level for each detected building
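Conceptually, the stages chain as follows. This is a hypothetical sketch, not the repository's implementation (the real driver lives in `inference/damage_inference.py`); `classify_patch` stands in for a callable wrapping the damage model as in the sketch above:

```python
import numpy as np
from skimage.measure import label, regionprops  # scikit-image is a listed dependency

def assess_damage(image: np.ndarray, mask: np.ndarray, classify_patch) -> list:
    """Stage 2 driver: crop a 64x64 patch around each building found in the
    stage-1 localization mask and classify its damage level."""
    results = []
    # Each connected component in the binary mask is treated as one building.
    for region in regionprops(label(mask)):
        cy, cx = (int(round(c)) for c in region.centroid)
        # Clamp the 64x64 crop window to the image bounds.
        y0 = min(max(cy - 32, 0), image.shape[0] - 64)
        x0 = min(max(cx - 32, 0), image.shape[1] - 64)
        patch = image[y0:y0 + 64, x0:x0 + 64]
        results.append((region.bbox, classify_patch(patch)))
    return results
```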
The main limitation of this pipeline is the localization model's performance. Due to hardware constraints (insufficient GPU memory and training time), the localization model was not trained to optimal performance levels. This leads to:
- Missed buildings, and therefore missed damage assessments
- False positive detections that add noise to damage predictions

Impact: The damage classifier achieves strong performance when provided with accurate building regions, but localization remains the bottleneck.
Install required packages:

```bash
pip install -r requirements.txt
```

Requirements:
- Python 3.9+
- PyTorch with CUDA support
- OpenCV (cv2)
- NumPy
- Matplotlib
- Shapely
- scikit-image
- scikit-learn
- tqdm
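After installing, you can quickly confirm that PyTorch was installed with working CUDA support:

```python
import torch

print('PyTorch:', torch.__version__)
print('CUDA available:', torch.cuda.is_available())
if torch.cuda.is_available():
    print('GPU:', torch.cuda.get_device_name(0))
```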
Clone the repository:

```bash
git clone https://github.com/zak-510/disaster-classifier.git
cd disaster-classifier
```

This project uses the xBD dataset. You will need to download it to train models or run inference.
- Go to the official xView2 website: https://xview2.org/
- Download the dataset
- Create a `Data` directory in the root of the project
- Extract the downloaded dataset with the following structure:
```
Data/
├── train/
│   ├── images/
│   └── labels/
└── test/
    ├── images/
    └── labels/
```
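A quick sanity check that the dataset landed in the expected layout (a small helper, not part of the repository):

```python
from pathlib import Path

# Print the file count for each expected dataset directory.
for split in ('train', 'test'):
    for sub in ('images', 'labels'):
        d = Path('Data') / split / sub
        print(d, '->', len(list(d.iterdir())) if d.is_dir() else 'MISSING')
```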
Pre-trained models are included in the repository in the `weights/` directory:
- `weights/best_localization.pth` - Building localization model
- `weights/best_damage.pth` - Damage classification model
Run localization inference:

```bash
python inference/localization_inference.py
```

Output:
- Processes 10 test images
- Generates three-panel visualizations (original, ground truth, predictions)
- Saves results to `test_results/localization/`
Run the end-to-end damage pipeline:

```bash
python inference/damage_inference.py
```

Output:
- Processes 10 test images with the end-to-end pipeline
- Combines localization + damage classification
- Generates colored damage visualizations
- Calculates comprehensive F1 metrics
- Saves results to `test_results/damage/`
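The colored visualizations map each damage class to a color. The palette below follows the common xView2 convention (green through red by severity); the actual colors used by `inference/damage_inference.py` may differ, so treat this as illustrative:

```python
import numpy as np

# Illustrative severity palette (RGB); verify against inference/damage_inference.py.
DAMAGE_COLORS = {
    'no-damage':    (0, 255, 0),    # green
    'minor-damage': (255, 255, 0),  # yellow
    'major-damage': (255, 128, 0),  # orange
    'destroyed':    (255, 0, 0),    # red
}

def paint_building(overlay: np.ndarray, building_mask: np.ndarray, damage_class: str) -> None:
    """Color one building's pixels in the overlay image in place."""
    overlay[building_mask.astype(bool)] = DAMAGE_COLORS[damage_class]
```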
Evaluate the damage classifier:

```bash
python evaluate_damage_classifier.py
```

Output:
- Evaluates on the full test set (53,850 building patches)
- Provides detailed performance metrics
- Generates a confusion matrix and classification report
- Saves detailed results to `test_results/damage_classifier_evaluation.csv`
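The exact CSV schema depends on the evaluation script, but assuming it stores one row per patch with true and predicted labels, the reported metrics can be recomputed with scikit-learn (the column names here are assumptions):

```python
import csv

from sklearn.metrics import classification_report

# Assumed column names; adjust to the actual CSV header.
with open('test_results/damage_classifier_evaluation.csv', newline='') as f:
    rows = list(csv.DictReader(f))

y_true = [row['true_label'] for row in rows]
y_pred = [row['predicted_label'] for row in rows]
print(classification_report(y_true, y_pred, digits=3))
```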
```
xbd-pipeline/
├── README.md                        # This file
├── requirements.txt                 # Python dependencies
├── evaluate_damage_classifier.py    # Model evaluation script
├── Data/                            # xBD dataset (not in repository)
│   ├── train/
│   └── test/
├── weights/                         # Pre-trained model files
│   ├── best_localization.pth
│   └── best_damage.pth
├── test_results/                    # Generated inference outputs
│   ├── localization/
│   └── damage/
├── data_processing/                 # Data preprocessing scripts
│   ├── localization_data.py
│   └── damage_data.py
├── models/                          # Model architecture definitions
│   ├── __init__.py
│   ├── model.py
│   └── damage_model.py
├── training/                        # Model training scripts
│   ├── train_localization.py
│   └── train_damage.py
├── inference/                       # Inference pipeline scripts
│   ├── localization_inference.py
│   └── damage_inference.py
└── tests/                           # Test utilities (also used as modules)
    ├── test_localization_inference.py
    └── test_damage_inference.py
```