Thanks for the great RetinaNet examples. I'm training a model to recognize a single class (versus background) and validating on a held-out labeled dataset. I load in the new validation dataset and run:
learn.validate(data_test.valid_dl)
This returns [0.34860164, 0.774676853563362]
For a one-class approach, I'm assuming the first returned value is the validation loss and the second is the Average Precision for the class. Is that correct?
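For context, my understanding is that `validate` follows the usual fastai convention of returning the loss first, then each metric in the order they were registered on the learner. A minimal stand-alone sketch of that convention (toy loss and metric functions are hypothetical, just for illustration, not fastai's actual implementation):

```python
def validate(loss_fn, metric_fns, preds, targets):
    """Return [loss, metric1, metric2, ...], mimicking the order
    fastai's Learner.validate reports: loss first, then metrics."""
    return [loss_fn(preds, targets)] + [m(preds, targets) for m in metric_fns]

# Toy stand-ins for illustration only
def toy_loss(p, t):
    # mean squared error over the predictions
    return sum((a - b) ** 2 for a, b in zip(p, t)) / len(p)

def toy_metric(p, t):
    # fraction of thresholded predictions matching the targets
    return sum((a >= 0.5) == bool(b) for a, b in zip(p, t)) / len(p)

preds, targets = [0.9, 0.2, 0.8], [1, 0, 1]
results = validate(toy_loss, [toy_metric], preds, targets)
# results[0] is the loss, results[1] is the (single) metric
```

So if Average Precision is the only metric on the learner, the second value in the returned list should correspond to it.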