Open
Labels: bug (Something isn't working)
Description
Hey!
I had a look at the normalization code used by the MLPicker. It seems that you always normalize with the standard deviation. However, many of the SeisBench models are trained with peak normalization. We ran into the same problem in SeisBench a while back (seisbench/seisbench#187), and it turned out to be quite severe. This might also explain why DiTing works substantially better for you than the other models: it is trained with standard deviation normalization.
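To illustrate the mismatch (a minimal NumPy sketch, not the actual MLPicker or SeisBench code; the function names and the epsilon guard are my own): the two schemes put the input on quite different amplitude scales, so a model trained with one and fed the other sees systematically mis-scaled waveforms.

```python
import numpy as np

def normalize_std(window: np.ndarray) -> np.ndarray:
    """Demean each trace, then divide by its standard deviation."""
    window = window - window.mean(axis=-1, keepdims=True)
    return window / (window.std(axis=-1, keepdims=True) + 1e-10)

def normalize_peak(window: np.ndarray) -> np.ndarray:
    """Demean each trace, then divide by its absolute peak amplitude."""
    window = window - window.mean(axis=-1, keepdims=True)
    return window / (np.abs(window).max(axis=-1, keepdims=True) + 1e-10)

# For a Gaussian-like trace, std normalization yields roughly unit
# variance, while peak normalization bounds amplitudes to [-1, 1],
# leaving the standard deviation well below 1.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 3001))  # 3 channels, 3001 samples
print(normalize_std(x).std())   # close to 1
print(normalize_peak(x).std())  # noticeably smaller
```

A model expecting peak-normalized input thus receives traces several times larger than anything it saw during training, which can degrade picks badly.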
An alternative would be to just use the annotate_batch_pre function of the model. This function automatically performs the normalization (in PyTorch) and uses the parameters from the model's config file, which are loaded with from_pretrained.