Dario Coscia, Pim de Haan, Max Welling
BLIP is a scalable, architecture-agnostic variational Bayesian framework for training or fine-tuning machine learning interatomic potentials (MLIPs), or message-passing neural networks more generally. BLIP delivers well-calibrated uncertainty estimates with minimal computational overhead for mean prediction at inference time, while integrating seamlessly with both standard and equivariant message-passing architectures.
By adopting BLIP, you can:
- Easily convert existing MLIPs into Bayesian models.
- Maintain the core architecture.
- Achieve more reliable uncertainty estimates, making your models more robust and interpretable.
This repository contains the source code to reproduce the experiments, as well as a simple Jupyter notebook to start playing around with the BLIP wrapper.
If you're interested in applying BLIP to your favourite MLIP (or, more generally, any GNN), we’ve created a Jupyter notebook that can be run on Google Colab. In this notebook, we walk through all the essential steps to build BLIP and give you a hands-on introduction to experimenting with our BayesianModelWrapper!
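To give a rough idea of what wrapping looks like, here is a minimal, hedged sketch: the import path, constructor signature, and sampling loop are assumptions for illustration only (the Colab notebook walks through the real API); only the `BayesianModelWrapper` name comes from this repository.

```python
# Minimal sketch, not the repository's exact API: the import path, constructor
# arguments, and sampling interface here are assumptions for illustration.
import torch
from blip import BayesianModelWrapper  # hypothetical import path

# Stand-in for any message-passing MLIP / GNN (a plain MLP for brevity).
base_mlip = torch.nn.Sequential(
    torch.nn.Linear(16, 64),
    torch.nn.SiLU(),
    torch.nn.Linear(64, 1),
)

# Wrap the deterministic model to obtain a variational Bayesian version of it.
bayesian_mlip = BayesianModelWrapper(base_mlip)

# Repeated forward passes sample from the approximate posterior, so their
# spread can serve as an uncertainty estimate alongside the mean prediction.
x = torch.randn(8, 16)
samples = torch.stack([bayesian_mlip(x) for _ in range(10)])
mean, std = samples.mean(dim=0), samples.std(dim=0)
```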
Clone the git repository, create a virtual conda environment and install the requirements.
```bash
# clone project
git clone https://github.com/dario-coscia/blip
cd blip

# create virtual environment
conda create -n venv python=3.11
conda activate venv

# install project
python -m pip install .
```

To download the data used in the experiments, run:
```bash
sh experiments/{nbody/ammonia/silica}/generate/generate.sh
```

Important note: the data used in this study were either generated or sourced from previously cited works (see our article), to which full credit is given.
We provide two ways to run the experiments.
The first is to run the main file directly:
```bash
python experiments/{nbody/ammonia/silica}_main.py --{extra_arguments}
```

The `extra_arguments` are found in `experiments/{nbody/ammonia/silica}/args.py`.
The second option is to run the shell script on a SLURM-based cluster. First, configure the sbatch file, then run:
```bash
sh shell/train_{nbody/ammonia/silica}.sh
```

You will run the same training (seed and hyperparameters) as in the main article.
Once the model is trained, it saves logs and checkpoints into dedicated directories, namely `logs` and `ckpt`, followed by the experiment type (nbody/ammonia/silica). The `experiments/{nbody/ammonia/silica}_main.py` file can be called with the `run_type=test` extra argument, which saves a `.pt` file containing the experiment's test output and data.
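If you want to inspect that test output programmatically, something along these lines should work; the file name and location below are placeholders, as the actual path depends on the experiment type and your configuration.

```python
# Sketch for inspecting the saved test output; the path below is a placeholder.
import torch

results = torch.load("ckpt/nbody/test_output.pt", map_location="cpu")

# The file bundles the experiment's test output and data; list what it contains.
if isinstance(results, dict):
    for key, value in results.items():
        print(key, getattr(value, "shape", type(value)))
else:
    print(type(results), results)
```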
```bibtex
@misc{coscia2025blipsbayesianlearnedinteratomic,
      title={BLIPs: Bayesian Learned Interatomic Potentials},
      author={Dario Coscia and Pim de Haan and Max Welling},
      year={2025},
      eprint={2508.14022},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2508.14022},
}
```