diffusiongym is a library for reward adaptation of any pre-trained flow model on any data modality.
To install diffusiongym, run:

```shell
pip install diffusiongym
```

diffusiongym requires PyTorch 2.3.1, and there may be other hard dependencies. Please open an issue if installation via the command above fails.
Molecule environments depend on FlowMol, which currently needs to be installed manually:
```shell
pip install git+https://github.com/cristianpjensen/FlowMol.git@8f4c98cbe68111e4e63480b250d925b6d960d3bc
```

Some image rewards depend on the clip package, which needs to be installed manually as well:
```shell
pip install git+https://github.com/openai/CLIP.git
```

Diffusion and flow models are largely agnostic to their data modality. They only require that the underlying data type supports a small set of operations. Building on this idea, diffusiongym is designed to be fully modular. You only need to provide the following:
- **Data type** `YourDataType` that implements `DDProtocol`, which defines the functions necessary for interacting with it as a flow model.
- **Base model** `BaseModel[YourDataType]`, which defines the scheduler, how to sample $p_0$, how to compute the forward pass, and how to preprocess and postprocess data.
- **Reward function** `Reward[YourDataType]`.
Once these are defined, you can sample from the flow model and apply reward adaptation methods, such as Value Matching.
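To make the division of labor concrete, here is a minimal toy sketch of the three components on 1-D data. The names `DDProtocol`, `BaseModel`, and `Reward` come from this README, but every method name and signature below is an illustrative assumption, not diffusiongym's actual API:

```python
# Hypothetical sketch of the three user-provided components.
# Method names and signatures are assumptions for illustration only.
from dataclasses import dataclass
from typing import Generic, Protocol, TypeVar


class DDProtocol(Protocol):
    """Operations a data type must support to act as flow-model state."""

    def add(self, other: "DDProtocol", scale: float) -> "DDProtocol": ...


@dataclass
class Scalar:
    """Toy 1-D data type satisfying the protocol."""

    value: float

    def add(self, other: "Scalar", scale: float) -> "Scalar":
        return Scalar(self.value + scale * other.value)


T = TypeVar("T", bound=DDProtocol)


class BaseModel(Generic[T]):
    """Defines how to sample p_0 and compute the forward pass."""

    def sample_p0(self) -> T:
        raise NotImplementedError

    def velocity(self, x: T, t: float) -> T:
        raise NotImplementedError

    def sample(self, steps: int = 10) -> T:
        # Euler integration of the flow ODE from t = 0 to t = 1.
        x = self.sample_p0()
        dt = 1.0 / steps
        for i in range(steps):
            x = x.add(self.velocity(x, i * dt), dt)
        return x


class ToyModel(BaseModel[Scalar]):
    def sample_p0(self) -> Scalar:
        return Scalar(0.0)

    def velocity(self, x: Scalar, t: float) -> Scalar:
        return Scalar(1.0)  # constant drift, so samples end near 1.0


class Reward(Generic[T]):
    def __call__(self, x: T) -> float:
        raise NotImplementedError


class CloseToOne(Reward[Scalar]):
    """Toy reward: higher when the sample is closer to 1.0."""

    def __call__(self, x: Scalar) -> float:
        return -abs(x.value - 1.0)


sample = ToyModel().sample()
print(round(sample.value, 6))
```

Reward adaptation methods then only interact with these three interfaces, which is what makes swapping in a new modality a matter of implementing the protocol.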
Much more information can be found in the documentation, including tutorials and API references.
If this library is useful to you, consider citing the following work:
```bibtex
@inproceedings{jensen2026value,
  title={Value Matching: Scalable and Gradient-Free Reward-Guided Flow Adaptation},
  author={Cristian Perez Jensen and Luca Schaufelberger and Riccardo De Santi and Kjell Jorner and Andreas Krause},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
}
```
