dcgaray/fire-tgnn
Forecasting Wildfire Spread with Temporal Graphs

[Figures: Eaton fire frame · Eaton fire grid overlay]

This project treats wildfire forecasting as a spatio-temporal graph problem. We start from calibrated satellite/airborne imagery (e.g., GOES-17/18, AVIRIS), convert each frame into a grid graph, and train temporal models to predict where fires will ignite next. Every tensor ties back to a scene and an overlay for auditability (Eaton/Palisades examples above).

Motivation and Goal

  • Wildfires threaten lives, property, air quality, and infrastructure. Early, local forecasts help planners and utilities act before conditions worsen.
  • Challenges: sparse positive pixels, multi-resolution sensors, and temporal dependencies that span minutes to hours.
  • Goal: build a reproducible scene → graph → tensor pipeline so researchers can inspect overlays, run baselines (MLP, physics-inspired, TGNN), and extend to richer models and covariates.

Dataset: FIRE-D (Radiance + Masks)

We work with the FIRE-D dataset: radiometrically/geometrically calibrated radiance GeoTIFFs plus fire/smoke masks from NASA, NOAA, KMA, and Planet (Planet radiances excluded; masks included).

  • Sensors/coverage: GOES-17/18, GK2A, AVIRIS-C, AirMSPI, MASTER, eMAS; fires include Williams Flats, Sheridan, Horsefly, Mosquito (2019, US), Uljin (2022, South Korea), and Palisades/Eaton (2025, US).
  • Licensing: NASA/NOAA/GK2A radiances and masks are open (CC0/unrestricted). Planet radiances are not redistributed; masks are. GeoTIFFs embed geolocation metadata.
  • Format: One GeoTIFF per scene; bands per file; fire/smoke labels in matching files with .fire/.smoke suffixes. Bands are resampled to the coarsest native resolution per sensor; timestamps live in L1B-style filenames.
  • Processing: Missions provide radiometric/geometric calibration; labels come from SIT-FUSE. The code ingests GeoTIFFs directly—no extra preprocessing is required.

We stream these scenes with GeoTIFFAdapter, build grid graphs with label-aware cropping, and export tensors that preserve links back to overlays for auditability.
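As an illustration of the input convention above, the sketch below pairs a radiance GeoTIFF with sibling label files by filename. The `.fire.tif`/`.smoke.tif` naming and the `pair_scene_files` helper are assumptions for illustration only, not part of GeoTIFFAdapter; adjust the suffixes to match your extract.

```python
from pathlib import Path

def pair_scene_files(radiance_path: Path) -> dict:
    """Pair a radiance GeoTIFF with sibling .fire/.smoke label files.

    Assumes labels sit next to the scene as <stem>.fire.tif and
    <stem>.smoke.tif (hypothetical naming; match your extract).
    """
    pairs = {"radiance": radiance_path}
    for kind in ("fire", "smoke"):
        candidate = radiance_path.with_name(
            f"{radiance_path.stem}.{kind}{radiance_path.suffix}"
        )
        if candidate.exists():
            pairs[kind] = candidate  # only include labels that exist
    return pairs
```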

Repository Layout

  • datasets/ – original .tgz archives (read-only).
  • artifacts/ – extracted assets (radiances/, labels/, derived metrics, manifests, figures).
  • src/
    • core/ – canonical dataclasses (Scene, Dataset, TemporalGraph, etc.).
    • adapters/ – dataset adapters (GeoTIFFAdapter).
    • graphs/ – builders + visualiser for graph snapshots.
    • data/ – PyTorch dataset helpers (satellite_graph.py, temporal.py, TorchSceneDataset).
    • experiments/ – pipeline orchestrators (TemporalGraphPipeline).
    • models/ – temporal graph models under development (sequence_tgnn.py).
    • temporal/ – probabilistic baselines (HMM utilities).
    • training/ – Torch training scripts (train_satellite.py).
    • visualization/ – reusable rendering helpers.
    • cli.py – entry point for quick visualisation and training commands.
  • satellite_viewer.py – dataset-agnostic frame/graph renderer; works for GOES-17, GOES-18, AVIRIS, and future presets.
  • scripts/goes18_viewer.py – legacy GOES-18-only viewer retained for historical reference.
  • documents/ – proposal, notes, and literature context.
  • requirements.txt – environment dependencies (Jupyter + raster stack + Torch).

Quickstart

  1. Environment + data

    python3 -m venv venv
    source venv/bin/activate
    pip install -r requirements.txt
    python firedata_processor.py --in-root datasets --out-root artifacts --extract --limit 200

    Place FIRE-D archives under datasets/; extracted GeoTIFFs land in artifacts/extracted/.

  2. See the data (frames + grids)

    python satellite_viewer.py \
      --preset goes18 \
      --region la_basin \
      --limit 12 \
      --frame-output-dir outputs/frames/goes18 \
      --graph-output-dir outputs/frames/goes18_graph \
      --auto-label-extent

    For AVIRIS, add --auto-label-margin 5000 to zoom to the fire strip.

  3. Build snapshots/animations via CLI

    python -m src.cli visualize \
      --preset goes18 \
      --region la_basin \
      --limit 6 \
      --frame-dir outputs/frames/cli_raw \
      --graph-frame-dir outputs/frames/cli_graph
  4. (Optional) Run the pooled MLP baseline locally

    Only if you have the full FIRE-D AVIRIS extracts and a local Torch setup; most runs happened in Colab via the notebooks:

    python -m src.cli train \
      --preset aviris \
      --region aviris_zone11 \
      --limit 200 \
      --max-epochs 5 \
      --grid-rows 12 \
      --grid-cols 24 \
      --auto-label-extent \
      --auto-label-margin 5000

    If you prefer Colab, open the root notebooks (mlp_baseline_node_mlp.ipynb, tgnn_baseline.ipynb) instead of running this locally.

Use the root notebooks for deeper analysis (overlays, metrics, threshold sweeps, TGNN demos).
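For orientation, here is a minimal NumPy sketch of what the pooled MLP baseline computes: a small feed-forward network mapping per-cell pooled features to a fire probability. Shapes, feature counts, and weights are illustrative stand-ins, not the actual setup in train_satellite.py.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in pooled features: one row per grid cell (12 x 24 = 288 cells),
# columns = per-cell radiance statistics. Purely illustrative.
X = rng.normal(size=(288, 4))

# One hidden layer + sigmoid head: P(fire) per cell.
W1 = rng.normal(scale=0.1, size=(4, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.maximum(X @ W1 + b1, 0.0)              # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output

probs = forward(X)  # shape (288, 1), values in (0, 1)
```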

Workflow Outline (scene → graph → tensors)

Spatial-GNN Architecture

  1. Ingest – GeoTIFFAdapter streams radiance + label rasters from artifacts/extracted/ into Scene objects.
  2. Visual QA – satellite_viewer.py (or satellite_temporal_graph_explorer.ipynb) renders radiance frames with colour-coded nodes (fire/smoke/background) so tensors match what you see.
  3. Graph construction – SatelliteGridBuilder crops to the region or label footprint, partitions into grid cells, and computes radiance statistics + label fractions. TemporalGraphPipeline materialises time-stamped snapshots for CLI runs.
  4. Datasets – SatelliteGraphDataset exposes pooled snapshot tensors; TemporalWindowDataset assembles sliding windows (scenes, timestamps, PyG-ready sequences) for temporal modelling.
  5. Models/baselines – MLP baseline (train_satellite.py / mlp_baseline_node_mlp.ipynb), physics-inspired comparator (PDE_baseline_node.ipynb), SequenceTGNN scaffold (sequence_tgnn.py, tgnn_baseline.ipynb), and HMM utilities (src/temporal/hmm.py).
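A stripped-down sketch of step 3's per-cell pooling, assuming a single-band (H, W) radiance raster and a binary fire mask; the real SatelliteGridBuilder adds more statistics, multi-band handling, and label-aware cropping:

```python
import numpy as np

def grid_pool(radiance: np.ndarray, fire_mask: np.ndarray,
              rows: int, cols: int) -> np.ndarray:
    """Pool an (H, W) radiance raster and binary fire mask into a
    (rows*cols, 2) node-feature matrix: [mean radiance, fire fraction]."""
    H, W = radiance.shape
    feats = np.zeros((rows * cols, 2))
    for r in range(rows):
        for c in range(cols):
            ys = slice(r * H // rows, (r + 1) * H // rows)
            xs = slice(c * W // cols, (c + 1) * W // cols)
            feats[r * cols + c, 0] = radiance[ys, xs].mean()
            feats[r * cols + c, 1] = fire_mask[ys, xs].mean()
    return feats
```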

Every snapshot keeps a visual companion (PNG) and metadata so researchers can audit any tensor against the underlying scene.
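The window assembly in step 4 can be sketched as a plain generator over time-ordered snapshots; the real TemporalWindowDataset additionally carries scene references, timestamps, and PyG graph objects:

```python
def sliding_windows(snapshots, window, horizon=1):
    """Yield (input_window, target) pairs over time-ordered snapshots.

    The model sees `window` consecutive snapshots and predicts the
    snapshot `horizon` steps after the window ends.
    """
    for t in range(len(snapshots) - window - horizon + 1):
        yield snapshots[t:t + window], snapshots[t + window + horizon - 1]
```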

Explore interactively (root notebooks)

  • satellite_temporal_graph_explorer.ipynb – end-to-end tour from raw frames to pooled tensors (preset agnostic).
  • temporal_sequence_explorer.ipynb – timelines, overlays, sliding windows, PyG sequences, and HMM/TGNN demos. For AVIRIS set CFG.auto_extent_labels = ("fire",) and CFG.auto_extent_margin = 5000.0.
  • preset_region_explorer.ipynb – preset/region browser for extents and grid sanity checks.
  • firedata_viewer.ipynb – gallery overview of each dataset with footprints/timespans and first/mid/last overlays.
  • mlp_baseline_node_mlp.ipynb – pooled MLP baseline with threshold sweeps and PR/ROC curves.
  • PDE_baseline_node.ipynb / tgnn_baseline.ipynb / sequence_tgnn_goes17_demo.ipynb – physics comparator and TGNN demos mirroring poster results.
  • documents/reports/*.tex – midterm/final reports documenting methods, datasets (FIRE-D), and results.

Regression Checks

Run these quick commands after changes to confirm the pipeline still works end-to-end:

  • Frame/export sanity check:
    python satellite_viewer.py \
      --preset goes17 \
      --region goes17_pnw \
      --limit 1 \
      --frame-output-dir outputs/checks/goes17_frames \
      --graph-output-dir outputs/checks/goes17_graph
  • CLI graph build:
    MPLCONFIGDIR=./outputs/mpl \
    python -m src.cli visualize \
      --preset aviris \
      --region aviris_zone11 \
      --limit 2 \
      --grid-rows 12 \
      --grid-cols 24 \
      --auto-label-extent \
      --auto-label-margin 5000 \
      --frame-dir outputs/checks/cli_frames \
      --graph-frame-dir outputs/checks/cli_graph
  • MLP baseline smoke test (optional, local Torch):
    python -m src.cli train \
      --preset aviris \
      --region aviris_zone11 \
      --limit 32 \
      --max-epochs 1 \
      --grid-rows 12 \
      --grid-cols 24 \
      --auto-label-extent \
      --auto-label-margin 5000
  • Optional: launch temporal_sequence_explorer.ipynb to regenerate timelines, overlays, temporal windows, and the HMM/TGNN demos.

About

This repository holds the work and code of the Untitled3.ipynb Team for our CSCI 566 Deep Learning project with Professor Yan Liu at USC, Fall 2025.
