Add support for training trackastra with SAM2 features #61
anwai98 wants to merge 7 commits into weigertlab:main from
Conversation
Poof, some of the ruff linting rules are funny haha. All should be working now. Lemme know how it looks @C-Achard
Thanks @anwai98, had a quick look and the approach seems reasonable, if you end up requiring changes on the pretrained_feats repo happy to have a look as well.
One thing I noticed: in train.py, if no model path is given (training from scratch), it will load the basic Trackastra model rather than the one from pretrained_feats, since in the inference-only version create() is called only from TrackingTransformer.from_folder; it would then likely crash due to the extra args.
You did mention you only want to fine-tune, but it may still be best to add some error handling in case anyone tries to train a pretrained_feats model from scratch, since the resulting exception will likely look unclear if no guard is added.
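Something along these lines could work as the guard; a minimal sketch, and the argument names (`model_path`, `use_pretrained_feats`) are illustrative, not the actual CLI flags:

```python
def check_train_args(model_path, use_pretrained_feats):
    """Fail early with a clear message instead of a cryptic crash later in create().

    Hypothetical helper: argument names are illustrative, not the real CLI.
    """
    if use_pretrained_feats and model_path is None:
        raise ValueError(
            "Training a pretrained_feats model from scratch is not supported; "
            "pass a model path pointing to a pretrained checkpoint to fine-tune."
        )


# Fine-tuning from a checkpoint passes the guard silently
check_train_args(model_path="checkpoints/sam2_feats.pt", use_pretrained_feats=True)
```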
Otherwise, I noticed some slightly misleading help strings in the CLI; perhaps have a look at the manuscript for better context on what these options do (I added comments on these with recommended defaults).
Finally, if your next step is to train a model, those previous configs may come in handy for that.
I hope this helps. I'm afraid I cannot test this extensively right now, but I'm happy to help further if anything is unclear in the review.
Best,
Cyril
```python
parser.add_argument(
    "--pretrained_n_augs",
    type=int,
    default=15,
```
Note: this may be a bit high, depending on the dataset size. I'd recommend starting with much lower values and leaning on the feature disambiguation to avoid overfitting.
```python
    "--reduced_pretrained_feat_dim",
    type=int,
    default=None,
    help="Reduce pretrained feature dimension via PCA to this size",
```
Since it does not look like you explicitly re-implemented the PCA dimensionality reduction I used at some point (it never made it into the final pipeline), I think this refers to the dimension of the pretrained features after a single FCL, as in https://github.com/C-Achard/trackastra/blob/a238b2cadc8e3b954c4af4afeba6df8faf18be71/trackastra/model/model.py#L296.
So the help string should not mention PCA, but rather the dimension of the pretrained features you'd like to feed to the encoder (which gets concatenated to the additional region props).
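For clarity, roughly what that single FCL does, sketched here with NumPy and random weights; all dimensions and names are made up for illustration and are not the actual model's:

```python
import numpy as np

rng = np.random.default_rng(0)

n_objects = 8
pretrained_dim = 256   # e.g. SAM2 feature size (illustrative)
reduced_dim = 64       # what --reduced_pretrained_feat_dim controls
regionprops_dim = 10   # additional region props per object

# A single learned fully connected layer (here: random weights), not PCA
W = rng.standard_normal((pretrained_dim, reduced_dim)) / np.sqrt(pretrained_dim)
b = np.zeros(reduced_dim)

pretrained_feats = rng.standard_normal((n_objects, pretrained_dim))
region_props = rng.standard_normal((n_objects, regionprops_dim))

reduced = pretrained_feats @ W + b  # (8, 256) -> (8, 64)
# Reduced pretrained features are concatenated with the region props
# before being fed to the encoder
encoder_input = np.concatenate([reduced, region_props], axis=1)  # (8, 74)
print(encoder_input.shape)
```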
Co-authored-by: Cyril Achard <cyril.achard@epfl.ch>
Hi @C-Achard, thank you so much for the detailed feedback. I'll check them out later in the evening and come back to you!
Hi @C-Achard,
Here are my minimal changes to make training work with SAM2 features.
Let me know how it looks!
PS. In case it helps, here's my yaml config file to train trackastra:
yaml config