Fine-tuned models with TorchSim #458
dominicvarghese started this conversation in General
Replies: 1 comment 2 replies
Things like the selection of model heads are very model specific, and the complexity of handling this is part of our push toward an external-model posture (see #120), so that individual model authors can expose whatever kwargs they like, provided they keep to the API contract. Which model are you trying to use here? That informs the extent to which we might be able to offer any help.
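To make the "model authors expose their own kwargs" idea concrete, here is a minimal, purely hypothetical sketch of how a model-specific calculator might surface head selection. The class name `MultiHeadCalculator`, the `head=` kwarg, and the head names are illustrative assumptions, not a real TorchSim or MLIP API; the actual kwarg (if any) depends entirely on the model you are loading.

```python
# Hypothetical sketch: a model-specific, ASE-style calculator that exposes
# head selection as a constructor kwarg, per the API-contract approach.
# All names here are illustrative, not a real TorchSim interface.

class MultiHeadCalculator:
    """Toy stand-in for a calculator wrapping a multi-head checkpoint."""

    def __init__(self, checkpoint: str, head: str = "default"):
        # A fine-tuned checkpoint may carry several output heads; here we
        # fake the set that a real loader would discover in the checkpoint.
        self.available_heads = {"default", "finetuned"}
        if head not in self.available_heads:
            raise ValueError(
                f"unknown head {head!r}; available: {sorted(self.available_heads)}"
            )
        self.checkpoint = checkpoint
        self.head = head

    def calculate(self, atoms=None):
        # A real calculator would run inference through the selected head;
        # this toy just reports which head would be used.
        return {"energy_head": self.head}


# Selecting the fine-tuned head at construction time:
calc = MultiHeadCalculator("model.ckpt", head="finetuned")
print(calc.calculate()["energy_head"])  # -> finetuned
```

The point of the sketch is the contract, not the names: whatever a given model exposes, an unknown head should fail loudly at construction rather than silently falling back to the pre-trained default.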
Hi everyone,
When loading different MLIPs via calculator(), is there any change required depending on whether the model is pre-trained or fine-tuned?
In particular, if a model was fine-tuned using multi-head training, how do we ensure that the correct head (e.g., the default or fine-tuned head) is selected during inference? Is there any additional argument that needs to be specified in the calculator() call?
Thanks,
Dominic