Paper repository: "Dragonfly: a modular deep reinforcement learning library"

dragonfly

dragonfly is a small deep reinforcement learning (DRL) library, mainly developed at CEMEF in the CFL group. It follows a basic level of modularity, based on a simple abstract factory, to make new developments quick and easy. If you end up using this library for research purposes, please consider citing one of the following papers:

Dragonfly, a modular deep reinforcement learning library
J. Viquerat, P. Garnier, A. Bateni, E. Hachem
arXiv preprint arXiv:2505.03778, 2025

Parallel bootstrap-based on-policy deep reinforcement learning for continuous fluid flow control applications
J. Viquerat, E. Hachem
Fluids, vol. 8, iss. 7, 2023

Installation and usage

Clone this repository and install it locally:

git clone git@github.com:jviquerat/dragonfly.git
cd dragonfly
pip install -e .

Environments are expected to be available locally or importable from your path. To train an agent on an environment, a .json case file is required (sample files for standard gym environments are available in dragonfly/env). Once you have written the corresponding <env_name>.json file to configure your agent, just run:

dgf --train <json_file>
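As a rough illustration, a case file pairs an environment with an agent configuration. The keys below (env_name, agent, type, learning_rate, n_episodes) are hypothetical placeholders, not the library's actual schema; refer to the sample files in dragonfly/env for the real format.

```json
{
    "env_name": "CartPole-v0",
    "agent": {
        "type": "ppo",
        "learning_rate": 0.0003
    },
    "n_episodes": 1000
}
```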

To evaluate a trained agent, you will need an agent saved in tf format, as well as a .json case file. Then, just run:

dgf --eval -net <net_folder> -json <json_file> -steps <n_steps_control> -warmup <n_steps_warmup> <warmup_control_value>

By default, the evaluation stops when the environment emits its done signal. Providing the -steps <n> option overrides the done signal and forces execution for n steps. Trained agents for standard gym environments are available in dragonfly/env.

CFD environments

| Environment | Description |
| --- | --- |
| turek-v0 | A drag reduction problem exploiting two synthetic jets on a cylinder immersed in a flow at Re=100 |
| shkadov-v0 | A control problem with multiple jets trying to damp instabilities on a falling liquid film, from the beacon library. Compatible with separability (see the arXiv preprint above) |
| rayleigh-v0 | A control problem with local temperature control to kill a convection cell at Ra=1e4, from the beacon library |
| mixing-v0 | A problem where the agent must mix a scalar quantity by controlling the boundary velocities, from the beacon library |

Mujoco environments

Hopper-v4, Ant-v4, Swimmer-v4, HalfCheetah-v4, Walker2d-v4, Humanoid-v4

Gym environments

Cartpole-v0, Pendulum-v0, Acrobot-v1, LunarLanderContinuous-v2, BipedalWalker-v3, MountainCar-v0, CarRacing-v2
