
Normalization in MLPicker #36

@yetinam

Description


Hey!

I had a look at the normalization code used by MLPicker. It seems that you always normalize with the standard deviation. However, quite a few SeisBench models are trained with peak normalization. We ran into the same problem in SeisBench a while back (seisbench/seisbench#187), and the mismatch turned out to be quite severe. This might also explain why DiTing works substantially better for you than the other models: it is trained with standard deviation normalization.
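To illustrate the difference, here is a minimal sketch of the two schemes in plain NumPy (illustrative only, not MLPicker's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
trace = rng.normal(size=3001)  # stand-in for a waveform window

# Standard deviation normalization (what MLPicker currently applies)
std_norm = (trace - trace.mean()) / trace.std()

# Peak normalization (what several SeisBench models are trained with)
demeaned = trace - trace.mean()
peak_norm = demeaned / np.abs(demeaned).max()

# The two differ by a trace-dependent scale factor (max|x| / std).
# For Gaussian noise this is roughly 3-4x, so a peak-trained model fed
# std-normalized input sees amplitudes well outside its training range.
print(np.abs(std_norm).max() / np.abs(peak_norm).max())
```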

An alternative would be to just use the model's annotate_batch_pre function. It performs the normalization automatically (in PyTorch) and uses the parameters from the config file that is loaded with from_pretrained.
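A minimal sketch of what that could look like, assuming a recent SeisBench version where annotate_batch_pre takes a torch batch plus an argdict (the PhaseNet model, the "original" weights, the empty argdict, and the input shape are just illustrative here):

```python
import torch
import seisbench.models as sbm

# from_pretrained loads the weights together with the config file,
# including the normalization type the checkpoint was trained with.
model = sbm.PhaseNet.from_pretrained("original")

# Raw waveform batch: (batch, channels, samples); values are dummies.
batch = torch.randn(8, 3, 3001)

# annotate_batch_pre applies the checkpoint's own preprocessing
# (demeaning and std or peak normalization, depending on the config).
batch = model.annotate_batch_pre(batch, {})

with torch.no_grad():
    preds = model(batch)
```

This way the picker would not need to know which normalization each model expects; every model applies its own.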
