Description
It would be very nice and useful to have a function that compares a set of models and looks for the best one in terms of bias and variance. The general gold standard is cross-validation. For parametric models, we can save time by using approximations to cross-validation such as AIC, BIC, etc. And when the parametric models are nested (i.e. one is a simpler special case of the other), the comparison can be done statistically with a likelihood ratio test (LRT).
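A minimal sketch of what the comparison logic could look like, assuming each fitted model can report its log-likelihood and number of parameters (all names, values, and the use of Python here are hypothetical, not part of this package's API):

```python
# Hedged sketch: compare models via AIC/BIC and, for nested models, an LRT.
import numpy as np
from scipy import stats


def aic(loglik: float, n_params: int) -> float:
    """Akaike information criterion: 2k - 2*log-likelihood."""
    return 2 * n_params - 2 * loglik


def bic(loglik: float, n_params: int, n_obs: int) -> float:
    """Bayesian information criterion: k*log(n) - 2*log-likelihood."""
    return n_params * np.log(n_obs) - 2 * loglik


def lrt_pvalue(loglik_simple: float, loglik_full: float, df_diff: int) -> float:
    """Likelihood ratio test for nested models.

    The test statistic 2*(loglik_full - loglik_simple) is compared to a
    chi-squared distribution with df_diff degrees of freedom.
    """
    statistic = 2 * (loglik_full - loglik_simple)
    return stats.chi2.sf(statistic, df=df_diff)


# Hypothetical fitted models: log-likelihoods and parameter counts made up
# for illustration only.
n_obs = 200
simple = {"loglik": -410.3, "k": 2}  # simpler nested model
full = {"loglik": -401.7, "k": 5}    # fuller model containing the simple one

for name, m in [("simple", simple), ("full", full)]:
    print(name,
          "AIC:", round(aic(m["loglik"], m["k"]), 1),
          "BIC:", round(bic(m["loglik"], m["k"], n_obs), 1))

p = lrt_pvalue(simple["loglik"], full["loglik"], df_diff=full["k"] - simple["k"])
print("LRT p-value:", round(p, 4))  # small p-value favours the fuller model
```

Lower AIC/BIC would indicate the preferred model, and the LRT p-value would only be reported when one model is nested in the other, as described above.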