Releases: instadeepai/mlip

v0.1.8

06 Jan 16:40

This version includes the following changes:

  • Fixing a bug in the ASE simulation engine so that periodic boundary conditions can be passed via the box config value.
  • Adapting the setup line for the zero shifts array to use np.zeros, which guarantees the correct array shape; see issue #36 for reference and the brief illustration below.
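
For illustration, a minimal sketch of the kind of fix described in the second item; the variable names are hypothetical and not taken from the library's internals:

```python
import numpy as np

# Constructing the zero shifts array with np.zeros and an explicit
# (num_edges, 3) shape guarantees a well-formed float array even in the
# edge case of an empty neighbor list.
num_edges = 0  # e.g. no neighbors within the cutoff
shifts = np.zeros((num_edges, 3))
assert shifts.shape == (0, 3)
```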

v0.1.7

12 Dec 09:33

This version includes the following changes:

  • Fixing issues with Periodic Boundary Conditions (PBCs) during inference.
  • Supporting PBCs passed from ase.Atoms during simulation with the ASE engine (see the sketch after this list). Passing an orthorhombic box from the configuration is still supported in both simulation engines, but may be discouraged in future releases.
  • Fixing a few bugs in batched simulations that occurred when neighbor lists were reallocated.
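
A minimal sketch of how PBCs can be attached to an ase.Atoms object; the mlip engine call itself is omitted, since these notes do not show its exact API:

```python
import numpy as np
from ase import Atoms

# Periodic boundary conditions travel with the Atoms object: set a cell
# and enable pbc, then hand the atoms to the ASE simulation engine as usual.
atoms = Atoms(
    "H2O",
    positions=[[0.0, 0.0, 0.0], [0.0, 0.76, 0.59], [0.0, -0.76, 0.59]],
    cell=np.eye(3) * 10.0,  # orthorhombic 10 Å box
    pbc=True,               # periodic in all three directions
)
```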

v0.1.6

11 Nov 10:34

This patch does not modify the library code. It only fixes the incorrect instructions for GPU-compatible installation in the README, docs, and tutorials: most shells require quotes around pip install commands that specify extras.

v0.1.5

10 Nov 16:59

This version includes the following changes:

  • Adding a batched simulations feature for MD simulations and energy minimizations with the JAX-MD backend.
  • Removing the now-obsolete stress_virial prediction.
  • Fixing the correctness of the stress and 0 K pressure predictions. In 0.1.4, the stress computation involved a derivative with respect to the cell but with fixed positions. Now, the strain also acts on the positions within the unit cell, deforming the material homogeneously. The resulting stress is rigorously translation invariant and requires no virial-term correction for cell-boundary effects; see, for instance, Thompson, Plimpton and Mattson (2009), Eq. (2). A schematic version is sketched after this list.
  • Migrating from poetry to uv for dependency and package management.
  • Improving the inefficient logging strategy in the ASE simulation backend.
  • Clarifying in the documentation that we recommend a smaller timestep when running energy minimizations with the JAX-MD simulation backend.
  • Removing the need for a separate install command for the JAX-MD dependency.
  • Adding an easier install method for GPU-compatible JAX.
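
A schematic version of the strain-derivative stress described above; energy_fn is a hypothetical stand-in for a model's energy function, not the library's actual API:

```python
import jax
import jax.numpy as jnp

def stress_from_strain(energy_fn, positions, cell):
    """Differentiate the energy with respect to a homogeneous strain that
    deforms both the cell and the positions, then divide by the volume."""
    volume = jnp.abs(jnp.linalg.det(cell))

    def strained_energy(strain):
        deformation = jnp.eye(3) + strain
        # The same deformation acts on the cell and on every position,
        # so the material is strained homogeneously.
        return energy_fn(positions @ deformation.T, cell @ deformation.T)

    return jax.grad(strained_energy)(jnp.zeros((3, 3))) / volume
```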

v0.1.4

09 Oct 17:17

This version includes the following changes:

  • Removing constraints on some dependencies, such as numpy, jax, and flax. The mlip library now allows for more flexibility in dependency versions for downstream projects. This includes support for the newest jax versions 0.6.x and 0.7.x.
  • Fixing simulation tutorial notebook by pinning versions of visualization helper libraries.
  • Adding the option to pass the dataset_info of a trained model to GraphDatasetBuilder, which is important for downstream tasks. Failing to do so may lead to silent inconsistencies in the mapping from atomic numbers to species indices, especially when the downstream data contains fewer elements than the training set (see e.g. the fine-tuning tutorial and the illustration after this list).
  • Fixing the stress predictions, with new formulas for the virial stress and the 0 Kelvin pressure term. These features should still be considered beta for now as we continue to test them (see the docstrings for more details).
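
To illustrate why reusing the training-time dataset_info matters, here is a small self-contained example; the element lists and mappings are hypothetical and purely for illustration:

```python
# Rebuilding the atomic-number -> species-index mapping from a smaller
# downstream dataset silently reassigns indices.
training_elements = [1, 6, 8]    # H, C, O seen during training
downstream_elements = [1, 8]     # downstream data contains only H and O

train_mapping = {z: i for i, z in enumerate(sorted(training_elements))}
downstream_mapping = {z: i for i, z in enumerate(sorted(downstream_elements))}

print(train_mapping)       # {1: 0, 6: 1, 8: 2}
print(downstream_mapping)  # {1: 0, 8: 1} -> oxygen now maps to index 1, not 2
```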

v0.1.3

14 Aug 17:01

This version includes the following changes:

  • Adding two new options to our MACE implementation (see MaceConfig; these features
    should be considered beta for now):

    • gate_nodes: bool to apply a scalar node gating after the power expansion
      layer,
    • species_embedding_dim: int | None to optionally encode the pairwise node
      species of edges in the convolution block.

    Making use of these options may improve inference speed at similar accuracy.
    A schematic view of the two fields is given after this list.

  • Fixing a bug where stress predictions would override energy and force predictions
    to None when predict_stress = True. Note that stress computations
    should not be considered reliable for now, and will be fixed in an upcoming
    release.
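
A schematic view of the two new options; the field names come from these notes, while the surrounding dataclass is only an illustration, not the library's actual MaceConfig definition:

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class MaceConfigSketch:
    # Apply a scalar node gating after the power expansion layer.
    gate_nodes: bool = False
    # Optionally encode the pairwise node species of edges in the convolution block.
    species_embedding_dim: int | None = None

config = MaceConfigSketch(gate_nodes=True, species_embedding_dim=16)
```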

v0.1.2

04 Jul 16:11

This version includes the following changes:

  • Fixing the computation of metrics during training by reweighting the metrics of
    each batch to account for a varying number of real graphs per batch; this makes
    the metrics independent of the batching strategy and the number of GPUs employed.
  • In addition to the point above, fixing the computation of RMSE metrics by now
    only computing MSE metrics in the loss and taking the square root at the very end
    when logging (see the sketch after this list).
  • Deleting the relative and 95th-percentile metrics, as they are not straightforward to
    compute on the fly with our dynamic batching strategy; we recommend computing them
    separately for a model checkpoint if necessary.
  • Small amount of modifications to README and documentation
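
A small sketch of the corrected aggregation described above, with hypothetical helper and variable names:

```python
import math

def aggregate_rmse(batch_mses, batch_num_real_graphs):
    """Accumulate per-batch MSEs weighted by the number of real graphs
    (padding graphs excluded) and take the square root only at the end."""
    total_graphs = sum(batch_num_real_graphs)
    weighted_mse = sum(
        mse * n for mse, n in zip(batch_mses, batch_num_real_graphs)
    ) / total_graphs
    return math.sqrt(weighted_mse)

# Example: three batches with different numbers of real graphs.
print(aggregate_rmse([0.04, 0.09, 0.01], [8, 5, 12]))
```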

v0.1.1

06 Jun 16:16

Small patch to the initial release with the following changes:

  • Small amount of modifications to README and documentation
  • Adding link to white paper in README

v0.1.0

02 Jun 10:28

Initial release with the following features:

  • Implemented model architectures: MACE, NequIP and ViSNet
  • Dataset preprocessing
  • Training of MLIP models
  • Batched inference with trained MLIP models
  • MD simulations with MLIP models using JAX-MD and ASE simulation backends
  • Energy minimizations with MLIP models using the same simulation backends
  • Fine-tuning of pre-trained MLIP models (only for MACE)