Standardize parameter typing to float | torch.Tensor when appropriate #466
Conversation
the changes here make the code less clean.
Will wait on a speed check from @falletta and then merge if it looks fine. I can't imagine it's significant but no real harm in checking.
Signed-off-by: Rhys Goodall <rhys.goodall@outlook.com>
@CompRhys in case this gets merged today, could you apply this to close #469? We could add a single-step fairchem relax test, but if models get moved to external repos it's maybe not worth it.
…-sim into float-or-tensor
The fairchem PR I made upstream has been sitting for a while without eyes from Meta, so I will add it here.
I see similar performance. I also checked the scaling of various operations with system size using my own profiling scripts, and everything looks good. So green light from my side, @CompRhys.
see #325
I am unsure whether step functions should accept floats, because then we incur coercion overhead at the integration level on every step. Init functions are a one-off cost, not a linear (per-step) cost.