
SteamedGit/ad_vs_sbi_workshop


Official Implementation of "Are gradients worth the effort? Comparing automatic differentiation and simulation-based inference for agent-based models"

This repository contains the code associated with our paper presented at the 1st Workshop on Differentiable Systems and Scientific Machine Learning @ EurIPS 2025.

Installation Instructions

We highly recommend using the uv package manager. With uv, installation is:

cd ad_vs_sbi_workshop
uv venv
uv sync

Otherwise, you can use pip with Python 3.12 (preferably inside a virtual environment):

cd ad_vs_sbi_workshop
pip install -r requirements.txt

Reproducing our work

Throughout this repository we make use of the Hydra configuration system. The configs/ directory contains the configuration files needed to reproduce our WandB hyperparameter sweeps, final training runs, and evaluation pipeline. We also provide the notebooks used to generate the plots in the workshop paper. (If you're not using uv, replace the uv run commands with python after activating your virtual environment.)

Training

We provide the weights of the trained models in the folder trained_models/, but you can also retrain the models from scratch.

(Note: We describe the ABM as being an SIRS model in the paper. However, for legacy reasons, the same ABM is referred to as an SIR model in the code.)
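For orientation, the SIRS dynamics can be sketched in a few lines. This is an illustrative, non-differentiable toy (the repository's ABM is differentiable and differs in detail); the parameter names mirror the environment variables used by abm/validate_ad_sir.py:

```python
import random

# Toy SIRS agent-based model (illustrative only; not the repository's ABM).
# Parameter names follow the environment variables of validate_ad_sir.py.
P_INFECT, P_RECOVER, P_WANE = 0.5, 0.05, 0.025
TOTAL_POPULATION, INITIAL_INFECTED = 60, 10

def step(states, rng):
    """One synchronous SIRS update over all agents."""
    frac_infected = states.count("I") / len(states)
    new = []
    for s in states:
        if s == "S" and rng.random() < P_INFECT * frac_infected:
            new.append("I")   # S -> I: infection scales with infected fraction
        elif s == "I" and rng.random() < P_RECOVER:
            new.append("R")   # I -> R: recovery
        elif s == "R" and rng.random() < P_WANE:
            new.append("S")   # R -> S: waning immunity (the final "S" in SIRS)
        else:
            new.append(s)
    return new

rng = random.Random(0)
states = ["I"] * INITIAL_INFECTED + ["S"] * (TOTAL_POPULATION - INITIAL_INFECTED)
for _ in range(50):
    states = step(states, rng)
```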

AD GVI (FlowJAX). To train the AD GVI models, run uv run -m inference.gvi_flowjax_sir with the configs found in configs/gvi/train. For example, to train the $w=1e3$ model with a budget of 100 ABM samples:

uv run -m inference.gvi_flowjax_sir --config-name budget_100

SNVLI. To train the SNVLI models, run uv run -m inference.sbi_snvli_sir with the configs found in configs/sbi/train. For example, to train the model with a budget of 1000 ABM samples:

uv run -m inference.sbi_snvli_sir --config-name budget_1k

Evaluation

Our evaluation pipeline consists of three scripts contained in eval/:

  • generate_ground_truth.py, which is run once to generate the ground-truth realisations used in the NLPD (negative log predictive density) assessment.
  • generate_posterior_predictions.py, which is run once per trained model to generate posterior predictive samples.
  • calculate_nlpd.py, which is run once per set of posterior predictive samples.
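To make the metric concrete, here is a minimal sketch of an NLPD estimate under per-dimension Gaussians fitted to posterior predictive samples. This is an illustration of the idea only; eval/calculate_nlpd.py defines the actual estimator used in the paper, which may differ:

```python
import math

def gaussian_nlpd(samples, y_true):
    """Average negative log predictive density of a ground-truth realisation
    under independent Gaussians fitted to posterior predictive samples.

    samples: list of predictive draws, each a list of length n_obs
    y_true:  ground-truth realisation, a list of length n_obs
    (Illustrative sketch; not the repository's estimator.)
    """
    n_obs = len(y_true)
    total = 0.0
    for j in range(n_obs):
        col = [s[j] for s in samples]
        mu = sum(col) / len(col)
        var = sum((x - mu) ** 2 for x in col) / len(col) + 1e-12  # avoid /0
        total += 0.5 * math.log(2 * math.pi * var) + (y_true[j] - mu) ** 2 / (2 * var)
    return total / n_obs
```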

For example, to evaluate the SNVLI model with a budget of 10 000, first run

uv run -m eval.generate_posterior_predictions --config-path ../configs/sbi/eval --config-name budget_10k

Next, run

uv run -m eval.calculate_nlpd --config-path ../configs/sbi/eval --config-name budget_10k

and the NLPD will be printed to the terminal.

Hyperparameter Sweeping

We used wandb sweeps with a budget of 50 runs for each model variant at each sampling budget. To rerun our sweeps, first make sure that you have logged in to wandb locally. Next, create any of the sweeps with uv run wandb sweep <sweep-config>. Finally, run uv run wandb agent --count 50 <sweep-id printed by wandb>.
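For reference, a sweep config passed to wandb sweep has roughly this shape. This is a hypothetical sketch (metric name, parameter names, and command are assumptions); the actual sweep files live under configs/:

```yaml
# Hypothetical wandb sweep config sketch -- see configs/ for the real sweeps.
method: bayes
metric:
  name: val_loss        # assumed metric name
  goal: minimize
parameters:
  learning_rate:
    distribution: log_uniform_values
    min: 1.0e-4
    max: 1.0e-2
```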

Reproducing figures

The code to reproduce the figures found in the workshop paper can be found in the supplied notebooks.

Automatic Differentiation Validation

To visualise automatic-differentiation gradients against finite-difference estimates, use validate_ad_sir.py in abm/. For example,

P_INFECT=0.5 P_RECOVER=0.05 P_WANE=0.025 TOTAL_POPULATION=60 INITIAL_INFECTED=10 GUMBEL_TEMP=0.05 uv run -m abm.validate_ad_sir

There are a few other configuration options that are supplied via environment variables.
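In miniature, the comparison this script performs looks like the following: compute a gradient by automatic differentiation and check it against a central finite difference. This standalone sketch uses a tiny forward-mode dual-number AD on a toy function; it is not the repository's code:

```python
# Minimal forward-mode AD via dual numbers, checked against a central
# finite difference -- the same kind of validation validate_ad_sir.py does.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __mul__(self, other):
        o = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__
    def __add__(self, other):
        o = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

def f(x):
    return 3 * x * x + 2 * x  # analytic gradient: f'(x) = 6x + 2

x0 = 0.5
ad_grad = f(Dual(x0, 1.0)).dot                      # forward-mode AD
eps = 1e-5
fd_grad = (f(x0 + eps) - f(x0 - eps)) / (2 * eps)   # central difference
```

Both estimates should agree closely (here, with f'(0.5) = 5); for a stochastic ABM the comparison is noisier, which is why a visual check is useful.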

Citing this work

@inproceedings{hitge_2025_ad_vs_sbi_workshop,
title={Are gradients worth the effort? Comparing automatic differentiation and simulation-based inference for agent-based models},
author={Timothy James Hitge and Arnau Quera-Bofarull and Elizaveta Semenova},
booktitle={1st Workshop on Differentiable Systems and Scientific Machine Learning @ EurIPS},
year={2025},
url={https://openreview.net/forum?id=NVvUkYNYPK}
}
