Releases · instadeepai/mlip
v0.1.8
v0.1.7
This version includes the following changes:
- Fixing issues with Periodic Boundary Conditions (PBCs) during inference.
- Supporting PBCs passed from `ase.Atoms` during simulation with the ASE engine (see the sketch after this list). Passing an orthorhombic box from the configuration is still supported in both simulation engines, but may be discouraged in future releases.
- Fixing a few bugs related to batched simulations that occurred when neighbor lists were reallocated.
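As a minimal sketch of the ASE-side setup, the periodicity information now travels on the `ase.Atoms` object itself; the call into the mlip ASE engine is elided here, since these notes do not show its exact interface:

```python
import numpy as np
from ase import Atoms

# A small periodic water box defined entirely on the ASE side: the ASE
# simulation engine now reads `atoms.pbc` and `atoms.cell` directly, so the
# box no longer has to be supplied through the engine configuration.
atoms = Atoms(
    symbols="OHH",
    positions=np.array([[0.00, 0.00, 0.00],
                        [0.96, 0.00, 0.00],
                        [-0.24, 0.93, 0.00]]),
    cell=np.diag([10.0, 10.0, 10.0]),  # orthorhombic cell, in Angstrom
    pbc=True,                          # periodic in all three directions
)
# ... hand `atoms` to the mlip ASE simulation engine as usual.
```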
v0.1.6
v0.1.5
This version includes the following changes:
- Adding a batched simulations feature for MD simulations and energy minimizations with the JAX-MD backend.
- Removing the now-obsolete `stress_virial` prediction.
- Fixing the correctness of the `stress` and 0 K `pressure` predictions. In 0.1.4, the stress computation involved a derivative with respect to the cell but with fixed positions. Now, the strain also acts on the positions within the unit cell, thus deforming the material homogeneously. This rigorously translation-invariant stress requires no virial-term correction for cell boundary effects; see, for instance, Thompson, Plimpton and Mattson 2009, Eq. (2). A minimal sketch of the idea follows this list.
- Migrating from poetry to uv for dependency and package management.
- Improving inefficient logging strategy in ASE simulation backend.
- Clarifying in the documentation that we recommend a smaller value for the timestep when running energy minimizations with the JAX-MD simulation backend.
- Removing need for separate install command for JAX-MD dependency.
- Adding easier install method for GPU-compatible JAX.
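For illustration, here is a minimal JAX sketch of the corrected stress definition described above; `energy_fn` and its signature are assumptions made for the example, not mlip's internal API:

```python
import jax
import jax.numpy as jnp

def stress(energy_fn, positions, cell):
    """Translation-invariant stress as a strain derivative of the energy.

    The strain deforms the cell *and* the positions homogeneously,
    r -> r @ (I + eps) and h -> h @ (I + eps), so no virial correction
    for cell boundary effects is needed (cf. Thompson, Plimpton and
    Mattson 2009, Eq. (2)).
    """
    def energy_at_strain(eps):
        eps = 0.5 * (eps + eps.T)  # keep the strain symmetric
        deformation = jnp.eye(3) + eps
        return energy_fn(positions @ deformation, cell @ deformation)

    volume = jnp.abs(jnp.linalg.det(cell))
    return jax.grad(energy_at_strain)(jnp.zeros((3, 3))) / volume
```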
v0.1.4
This version includes the following changes:
- Removing constraints on some dependencies, such as numpy, jax, and flax. The mlip library now allows for more flexibility in dependency versions for downstream projects. This includes support for the newest jax versions 0.6.x and 0.7.x.
- Fixing simulation tutorial notebook by pinning versions of visualization helper libraries.
- Adding the option to pass the `dataset_info` of a trained model to `GraphDatasetBuilder`, which is important for downstream tasks. Failure to do so might lead to silent inconsistencies in the mapping from atomic numbers to species indices, especially when the downstream data has fewer elements than the training set (see, e.g., the fine-tuning tutorial, and the illustration after this list).
- Fixing the `stress` predictions with new formulas for the virial stress and the 0 Kelvin pressure term. These features should still be seen as beta for now as we proceed to test them further (see docstrings for more details).
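To see why reusing the trained model's `dataset_info` matters, consider this plain-Python illustration of the silent index shift (hypothetical element sets, not the mlip API):

```python
# Atomic numbers seen during training vs. in a smaller downstream dataset.
train_elements = [1, 6, 8]      # H, C, O
downstream_elements = [1, 8]    # H, O only: no carbon in the fine-tuning data

# Rebuilding the mapping from the downstream data alone silently shifts indices:
train_map = {z: i for i, z in enumerate(sorted(train_elements))}
downstream_map = {z: i for i, z in enumerate(sorted(downstream_elements))}

print(train_map)       # {1: 0, 6: 1, 8: 2}
print(downstream_map)  # {1: 0, 8: 1}  -> oxygen moves from index 2 to index 1,
                       # so the model's learned embeddings no longer line up.
```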
v0.1.3
This version includes the following changes:
- Adding two new options to our MACE implementation (see `MaceConfig`; these features should be considered in beta state for now):
  - `gate_nodes: bool` to apply a scalar node gating after the power expansion layer,
  - `species_embedding_dim: int | None` to optionally encode the pairwise node species of edges in the convolution block.
  Making use of these options may improve inference speed at similar accuracy. A hedged configuration sketch follows this list.
- Fixing a bug where stress predictions would override energy and force predictions to `None` when `predict_stress = True`. Note that stress computations should not be considered reliable for now, and will be fixed in an upcoming release.
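A hypothetical configuration sketch; the two field names come from these notes, but the import path and everything else about `MaceConfig` are assumptions:

```python
from mlip.models import MaceConfig  # import path is an assumption

# Illustrative values only; both fields are named in the v0.1.3 notes.
mace_config = MaceConfig(
    gate_nodes=True,           # scalar node gating after the power expansion layer
    species_embedding_dim=16,  # pairwise node-species encoding in the convolution block
)
```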
v0.1.2
This version includes the following changes:
- Fixing the computation of metrics during training by reweighting the metrics of each batch to account for a varying number of real graphs per batch; this makes the metrics independent of the batching strategy and the number of GPUs employed.
- In addition to the point above, fixing the computation of RMSE metrics by only computing MSE metrics in the loss and taking the square root at the very end when logging (a sketch follows this list).
- Deleting the relative and 95th-percentile metrics, as they are not straightforward to compute on-the-fly with our dynamic batching strategy; we recommend computing them separately for a model checkpoint if necessary.
- Making a small number of modifications to the README and documentation.
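A minimal numpy sketch of the corrected accumulation (the batch layout and names are illustrative assumptions, not mlip's training loop): per-batch squared errors are weighted by the number of real graphs, and the square root is taken only once at logging time.

```python
import numpy as np

def rmse_over_batches(batches):
    """RMSE accumulated correctly over batches of varying effective size.

    Each batch is (predictions, targets, num_real_graphs), where padding
    graphs beyond `num_real_graphs` must not contribute to the metric.
    """
    total_sq_error = 0.0
    total_real = 0
    for predictions, targets, num_real in batches:
        sq_errors = (predictions[:num_real] - targets[:num_real]) ** 2
        total_sq_error += float(np.sum(sq_errors))
        total_real += num_real
    # Take the square root only once, at the very end.
    return np.sqrt(total_sq_error / total_real)
```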
v0.1.1
v0.1.0
Initial release with the following features:
- Implemented model architectures: MACE, NequIP and ViSNet
- Dataset preprocessing
- Training of MLIP models
- Batched inference with trained MLIP models
- MD simulations with MLIP models using JAX-MD and ASE simulation backends
- Energy minimizations with MLIP models using the same simulation backends
- Fine-tuning of pre-trained MLIP models (only for MACE)