astronomix (formerly jf1uids) is a differentiable hydrodynamics and magnetohydrodynamics code
written in JAX with a focus on astrophysical applications. astronomix is easy to use, well-suited for
fast method development, and scales to multiple GPUs; its differentiability
opens the door to gradient-based inverse modeling and sampling as well
as surrogate / solver-in-the-loop training.
- 1D, 2D and 3D hydrodynamics and magnetohydrodynamics simulations scaling to multiple GPUs
- a high-order finite difference constrained transport WENO MHD scheme following HOW-MHD by Seo & Ryu (2023) as well as the provably divergence-free and provably positivity-preserving finite volume approach of Pang and Wu (2024)
- for finite volume simulations, the basic Lax-Friedrichs, HLL and HLLC Riemann solvers as well as the HLLC-LM (Fleischmann et al., 2020) and the HYBRID-HLLC & AM-HLLC (Hu et al., 2025) variants (successors to HLLC-LM)
- a novel (possibly) conservative self-gravity scheme with improved stability at strong discontinuities (currently only available for the finite volume solver)
- mass- and energy-conserving spherically symmetric simulations based on the scheme of Crittenden and Balachandar (2018)
- backwards- and forwards-differentiable with adaptive timestepping
- modules for turbulent driving, a simple stellar wind, and simple radiative cooling
- easily extensible, all code is open source
- Installation
- Hello World! Your first astronomix simulation
- Notebooks for Getting Started
- Showcase
- Scaling tests
- Documentation
- Methodology
- Limitations
- Citing astronomix
astronomix can be installed via pip:

```bash
pip install astronomix
```

Note that if JAX is not yet installed, only the CPU version of JAX will be installed
as a dependency. For a GPU-compatible installation of JAX, please refer to the
JAX installation guide.
Below is a minimal example of a 1D hydrodynamics shock tube simulation using astronomix.
```python
import jax.numpy as jnp
from astronomix import (
    SimulationConfig, SimulationParams,
    get_helper_data, finalize_config,
    get_registered_variables, construct_primitive_state,
    time_integration
)

# the SimulationConfig holds static
# configuration parameters
config = SimulationConfig(
    box_size = 1.0,
    num_cells = 101,
    progress_bar = True
)

# the SimulationParams can be changed
# without causing re-compilation
params = SimulationParams(
    t_end = 0.2,
)

# the variable registry allows for the principled
# access of simulation variables from the state array
registered_variables = get_registered_variables(config)

# next we set up the initial state using the helper data
helper_data = get_helper_data(config)

shock_pos = 0.5
r = helper_data.geometric_centers

rho = jnp.where(r < shock_pos, 1.0, 0.125)
u = jnp.zeros_like(r)
p = jnp.where(r < shock_pos, 1.0, 0.1)

# get initial state
initial_state = construct_primitive_state(
    config = config,
    registered_variables = registered_variables,
    density = rho,
    velocity_x = u,
    gas_pressure = p,
)

# finalize and check the config
config = finalize_config(config, initial_state.shape)

# now we run the simulation
final_state = time_integration(initial_state, config, params, registered_variables)

# the final_state holds the final primitive state, the
# variables can be accessed via the registered_variables
rho_final = final_state[registered_variables.density_index]
u_final = final_state[registered_variables.velocity_index]
p_final = final_state[registered_variables.pressure_index]
```

You've just run your first astronomix simulation! You can continue with the notebooks below;
we have also prepared a more advanced use case (stellar wind in driven MHD turbulence).
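To inspect the result you can, for example, plot the final profiles over the cell centers. This is a plain matplotlib sketch reusing the arrays from the example above:

```python
import matplotlib.pyplot as plt

r = helper_data.geometric_centers  # cell centers from the helper data

# plot the final density, velocity and pressure profiles of the shock tube
fig, axes = plt.subplots(1, 3, figsize=(10, 3), sharex=True)
for ax, field, label in zip(axes, (rho_final, u_final, p_final), ("density", "velocity", "pressure")):
    ax.plot(r, field)
    ax.set_xlabel("x")
    ax.set_title(label)
fig.tight_layout()
plt.show()
```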
- hydrodynamics
- magnetohydrodynamics
- self-gravity
- stellar wind
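Because astronomix is written in JAX, gradients can also be propagated through an entire simulation. The snippet below is a minimal sketch of the idea, reusing the shock-tube setup from above; the toy objective is a hypothetical placeholder for an actual inverse problem:

```python
import jax
import jax.numpy as jnp

def loss_fn(initial_state):
    # differentiate through the full time integration
    final_state = time_integration(initial_state, config, params, registered_variables)
    rho_final = final_state[registered_variables.density_index]
    # toy objective: drive the final density towards a uniform value of 1
    return jnp.mean((rho_final - 1.0) ** 2)

# gradient of the loss with respect to the initial state
loss, grads = jax.value_and_grad(loss_fn)(initial_state)
```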
![]() |
|---|
| Magnetohydrodynamics simulation with driven turbulence at a resolution of 512³ cells in a fifth order CT MHD scheme run on 4 H200 GPUs. |
![]() |
|---|
| Magnetohydrodynamics simulation with driven turbulence and stellar wind at a resolution of 512³ cells in a fifth order CT MHD scheme run on 4 H200 GPUs. |
![]() |
![]() |
|---|---|
| Orszag-Tang Vortex | 3D Collapse |
| Gradients Through Stellar Wind |
| Novel (Possibly) Conservative Self Gravity Scheme, Stable at Strong Discontinuities |
![]() |
|---|
| Wind Parameter Optimization |
Our first scaling tests cover the two MHD schemes implemented in astronomix: the
2nd order finite volume (fv_mhd) scheme and the 5th order finite difference (fd_mhd) scheme.
The following results were obtained on a single NVIDIA H200 GPU; the test, run at different resolutions, was an MHD blast wave (see the code).
| Runtime benchmarking of the fv_mhd and fd_mhd schemes on a single NVIDIA H200 GPU. |
The finite volume scheme is roughly an order of magnitude faster at the same resolution.
However, considering the accuracy per computational cost, with the 512³ fd_mhd simulation taken as the reference solution, the 5th order finite difference scheme is more efficient.
| Accuracy versus computational cost for the fv_mhd and fd_mhd schemes on a single NVIDIA H200 GPU. |
The finite difference scheme achieves higher accuracy in less computation time.
We have tested the multi-GPU scaling of the 5th order finite difference MHD scheme, comparing the runtime of the same simulation on 1 and 4 NVIDIA H200 GPUs (strong scaling).
| Multi-GPU scaling of the 5th order finite difference MHD scheme on 4 NVIDIA H200 GPUs. |
We reach speedups of up to ~3.5x at 512³ resolution on 4 GPUs compared to a single-GPU run. At higher resolutions we would expect to eventually reach perfect scaling. The lower speedup at 600³ cells in our test might be due to other processes running on the GPU node at the time of benchmarking, as we are using shared cluster resources.
See here.
For now we will focus on methodological insights. The basics can be found in
- Van Leer (2006) for the basic 2nd order MUSCL finite volume scheme implemented
- Crittenden and Balachandar (2018) for the spherically symmetric simulations
- Pang and Wu (2024) for the 2nd order provably divergence-free and provably positivity-preserving finite volume MHD scheme following the research line of operator splitting the MHD system into a hydrodynamic and magnetic update step (also see Dao et al., 2024)
- Seo & Ryu (2023) for the 5th order finite difference constrained transport MHD scheme
- Tomida & Stone (2023) for the coupling of self-gravity to the hydrodynamics equations
The first step in the Pang and Wu (2024) scheme is to split the MHD system into a magnetic and hydrodynamic system. Different from their approach, here we motivate the split based on the primitive form of the MHD equations (their reasoning is generally more formal).
Let us start with the MHD equations in primitive form (see Equation Set 2 in Clarke (2015))
Here
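For orientation, the ideal MHD equations in primitive variables take the following standard textbook form (written with $\mu_0 = 1$; sign and unit conventions may differ slightly from Clarke (2015)), where $\rho$ is the density, $\vec{v}$ the velocity, $p$ the gas pressure, $\vec{B}$ the magnetic field and $\gamma$ the adiabatic index:

$$
\begin{gathered}
\frac{\partial \rho}{\partial t} + \left( \vec{v} \cdot \vec{\nabla} \right) \rho + \rho \, \vec{\nabla} \cdot \vec{v} = 0 \\
\frac{\partial \vec{v}}{\partial t} + \left( \vec{v} \cdot \vec{\nabla} \right) \vec{v} + \frac{1}{\rho} \vec{\nabla} p - \frac{1}{\rho} \left( \vec{\nabla} \times \vec{B} \right) \times \vec{B} = 0 \\
\frac{\partial p}{\partial t} + \left( \vec{v} \cdot \vec{\nabla} \right) p + \gamma p \, \vec{\nabla} \cdot \vec{v} = 0 \\
\frac{\partial \vec{B}}{\partial t} - \vec{\nabla} \times \left( \vec{v} \times \vec{B} \right) = 0
\end{gathered}
$$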
Let us now split this system into a hydrodynamic part (which we can turn back into the conservative form)
and a magnetic part
Where for an MHD time-step we apply a second order operator splitting approach
We solve the hydrodynamical system A as before and for system B we use finite differencing and a fixed-point iteration.
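One standard realization of such a second-order splitting is Strang splitting. In operator notation (a sketch in our own notation, with $\mathcal{A}^{\Delta t}$ and $\mathcal{B}^{\Delta t}$ denoting the advance of systems A and B over a time step $\Delta t$, and $U$ the state), a full step reads

$$
U^{n+1} = \mathcal{A}^{\Delta t / 2} \left( \mathcal{B}^{\Delta t} \left( \mathcal{A}^{\Delta t / 2} \left( U^{n} \right) \right) \right).
$$

Whether the half steps are taken in system A or in system B is an implementation choice.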
On a grid in three dimensions, the curl of a field
and on a grid in two dimensions as (the field is still
The central difference divergence in 3D is given as
and in 2D as
Note that the discrete divergence of a discrete curl as defined above still vanishes. For the 3D field on the 2D grid, this follows from
$$
\begin{gathered}
\vec{\nabla} \cdot (\vec{\nabla} \times \vec{A}_{i,j}) = \frac{1}{2 \Delta x}\left( \frac{A^z_{i+1,j+1} - A^z_{i+1,j-1}}{2 \Delta y} - \frac{A^z_{i-1,j+1} - A^z_{i-1,j-1}}{2 \Delta y} \right) \\
+ \frac{1}{2 \Delta y}\left( -\frac{A^z_{i+1,j+1} - A^z_{i-1,j+1}}{2 \Delta x} + \frac{A^z_{i+1,j-1} - A^z_{i-1,j-1}}{2 \Delta x} \right) \\
= \frac{1}{4 \Delta y \Delta x}\big(A^z_{i+1,j+1} - A^z_{i+1,j+1} + A^z_{i+1,j-1} - A^z_{i+1,j-1} + A^z_{i-1,j+1} - A^z_{i-1,j+1} + A^z_{i-1,j-1} - A^z_{i-1,j-1} \big) \\
= 0
\end{gathered}
$$
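As a standalone illustration of these discrete operators (not the operators used internally by astronomix), here is a minimal JAX sketch of the 2D central-difference curl and divergence on a periodic grid, taking axis 0 as $x$ and axis 1 as $y$:

```python
import jax.numpy as jnp

def curl_2d(Az, dx, dy):
    # central-difference curl of a vector potential A = (0, 0, Az) on a periodic grid,
    # returning the in-plane field components (Bx, By) = (dAz/dy, -dAz/dx)
    dAz_dy = (jnp.roll(Az, -1, axis=1) - jnp.roll(Az, 1, axis=1)) / (2 * dy)
    dAz_dx = (jnp.roll(Az, -1, axis=0) - jnp.roll(Az, 1, axis=0)) / (2 * dx)
    return dAz_dy, -dAz_dx

def div_2d(Fx, Fy, dx, dy):
    # central-difference divergence on the same periodic grid
    dFx_dx = (jnp.roll(Fx, -1, axis=0) - jnp.roll(Fx, 1, axis=0)) / (2 * dx)
    dFy_dy = (jnp.roll(Fy, -1, axis=1) - jnp.roll(Fy, 1, axis=1)) / (2 * dy)
    return dFx_dx + dFy_dy

# the divergence of the curl vanishes up to floating-point round-off
nx = 64
dx = dy = 1.0 / nx
x = jnp.arange(nx) * dx
Az = jnp.sin(2 * jnp.pi * x)[:, None] * jnp.cos(4 * jnp.pi * x)[None, :]
Bx, By = curl_2d(Az, dx, dy)
print(jnp.abs(div_2d(Bx, By, dx, dy)).max())
```

The printed maximum is at round-off level, mirroring the pairwise cancellation shown above.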
We perform a time-step of System B given by
using an implicit midpoint method
with
We solve this implicit equation using a fixed-point iteration
We use
This iteration usually converges in a few iterations.
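To make the structure of this update explicit, the generic form of an implicit midpoint step solved by fixed-point iteration is (a sketch, with $U$ the state of system B and $F$ its finite-difference right-hand side; the concrete form of $F$ follows from the equations above):

$$
U^{n+1} = U^{n} + \Delta t \, F\!\left( \tfrac{1}{2} \left( U^{n} + U^{n+1} \right) \right),
\qquad
U^{(k+1)} = U^{n} + \Delta t \, F\!\left( \tfrac{1}{2} \left( U^{n} + U^{(k)} \right) \right),
\quad U^{(0)} = U^{n}.
$$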
For the provably positivity preserving property of the hydrodynamic update, we refer to Pang and Wu (2024).
In Pang and Wu (2024) the positivity preserving property is obtained based on using a Lax-Friedrichs Riemann flux combined with a specific limiting procedure. For our turbulence simulations we would like to use a less dissipative Riemann solver. In our 2D tests, using HLL or HLLC worked well without problems.
![]() |
|---|
| MHD jet simulated with the fv_mhd scheme. |
However, in 3D we observed the development of artificial grid-scale oscillations. These oscillations could be mitigated by using a first order implicit Euler solve for the magnetic update step instead of the implicit midpoint method.
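For reference, the first-order implicit (backward) Euler step mentioned here replaces the midpoint evaluation by an evaluation at the new time level and can be solved with the same fixed-point iteration (same generic notation as above):

$$
U^{n+1} = U^{n} + \Delta t \, F\!\left( U^{n+1} \right).
$$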
![]() |
|---|
| The less dissipative HLL Riemann solver combined with the implicit midpoint magnetic update produces numerical oscillations. |
Consider the gravitational potential
we can approximate the acceleration on the gas at the cell centers via central differences of the potential.
The simplest source-term coupling of the potential to the hydrodynamics is by an update
where
However, this scheme does not conserve energy, essentially because the mass actually transported in the gravitational field is not the bulk flux but the Riemann (here HLLC) flux. Writing the HLLC flux in its central formulation,
we can see a bulk-flux component and a gradient-based component. The higher the resolution, the more aligned the Riemann flux and the bulk flux become, becoming equal in the limit of infinite resolution.
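For concreteness, a 1D sketch of this simple source-term coupling (in our own notation: $\Phi_i$ is the potential, $g_i$ the cell-centered acceleration from central differences, $(\rho v)_i$ the momentum density and $E_i$ the total energy density; whether the right-hand sides are evaluated at the old time level or time-centered is an implementation detail):

$$
g_i = -\frac{\Phi_{i+1} - \Phi_{i-1}}{2 \Delta x},
\qquad
(\rho v)_i^{n+1} = (\rho v)_i^{n} + \Delta t \, \rho_i \, g_i,
\qquad
E_i^{n+1} = E_i^{n} + \Delta t \, (\rho v)_i \, g_i .
$$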
For a spherical collapse test (Evrard's collapse)
the results of the simple source-term scheme are presented below. The results of our (near-)energy-conserving scheme are also shown in the figure.
| Energy conservation properties of the simple source-term scheme (non-conservative) and our scheme (conservative) (described later on). |
As expected, the total energy is not conserved, with the error decreasing at higher resolution. Note that the energy error is first order in time, so taking smaller time steps does not help (Springel, 2010).
We will now consider the source-term scheme implemented in ATHENA (also implemented here) and described in Tomida & Stone (2023, Sec. 3.2.3), which draws from but is not identical to the fully conservative scheme of Mullen et al. (2021).
In addition to the central difference acceleration, we also calculate approximations of the acceleration at the interfaces as
and in the energy source term use the Riemann fluxes
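Reconstructed from this description and the discussion below (a 1D sketch in our notation, with $F^{\rho,\mathrm{Riemann}}_{i \pm \frac{1}{2}}$ the mass flux returned by the Riemann solver at the interfaces), the interface accelerations and the energy source term take roughly the form

$$
g_{i+\frac{1}{2}} = -\frac{\Phi_{i+1} - \Phi_{i}}{\Delta x},
\qquad
E_i^{n+1} = E_i^{n} + \frac{\Delta t}{2} \left( F^{\rho,\mathrm{Riemann}}_{i-\frac{1}{2}} \, g_{i-\frac{1}{2}} + F^{\rho,\mathrm{Riemann}}_{i+\frac{1}{2}} \, g_{i+\frac{1}{2}} \right),
$$

where the factor $\frac{1}{2}$ is the half-half split of the interface terms between the two adjacent cells discussed below.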
In my view, this scheme has the following flaws:
- The momentum and energy updates become misaligned; while this might have a positive corrective effect in some cases, in others it introduces unwanted changes to the internal energy.
- Consider adjacent cells $i$ and $i+1$: each cell receives a term $\frac{1}{2} \rho v_{i+\frac{1}{2}}^\text{Riemann} g_{i+\frac{1}{2}}$. These terms correspond to the Riemann flux through the interface at $x_{i+\tfrac{1}{2}}$, which brings about the improved energy conservation (the energy source term reflects the mass actually moved in the potential). However, the half-half split of the Riemann flux between adjacent cells can be problematic: consider a discontinuity where one of the adjacent cells has very little energy to begin with and the flux flows against the potential - the half-half split means that half of the work done against the potential is attributed to this energy-depleted cell (a rough failure criterion is sketched below).
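To see how this can fail, note that in the sketch above the work term attributed to cell $i$ at the interface $x_{i+\frac{1}{2}}$ is $\frac{\Delta t}{2} \, F^{\rho,\mathrm{Riemann}}_{i+\frac{1}{2}} \, g_{i+\frac{1}{2}}$. As a rough order-of-magnitude criterion (ignoring the simultaneous flux and momentum updates, with adiabatic index $\gamma$), a negative pressure results once

$$
\frac{\Delta t}{2} \left| F^{\rho,\mathrm{Riemann}}_{i+\frac{1}{2}} \, g_{i+\frac{1}{2}} \right| \gtrsim \frac{p_i}{\gamma - 1},
$$

i.e. once the work attributed to the cell by the half-half split exceeds its internal energy density.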
I assume the second aspect to be the reason why this scheme failed for Evrard's collapse in my tests, producing negative pressures at the discontinuity.
![]() |
|---|
| Failure mode of the half-half-split scheme compared to our Riemann-split scheme. |
This is substantiated by the observation that our improved scheme, which replaces the half-half split, can simulate Evrard's collapse successfully.
This is very much work in progress but the main idea is to replace the half-half split.
Consider the interface between cells $i$ and $i+1$.
Now, consider that we can write the Riemann flux as
where
For a piecewise constant reconstruction
More on this will be presented in future work.
The finite difference MHD scheme was implemented only recently and currently supports only periodic boundary conditions. Self-gravity, cosmic rays and cooling are also not yet supported with the finite difference scheme.
If you use astronomix in your research, please cite via
```bibtex
@misc{storcks_astronomix_2025,
  doi = {10.5281/ZENODO.17782162},
  url = {https://zenodo.org/doi/10.5281/zenodo.17782162},
  author = {Storcks, Leonard},
  title = {astronomix - differentiable MHD in JAX},
  publisher = {Zenodo},
  year = {2025},
  copyright = {MIT License}
}
```

There is also a workshop paper on an earlier stage of the project.







