Composer has a brilliant dashboard (https://app.mosaicml.com/explorer/imagenet) which summarises all of their experiment results. It lets you inspect results against a set of hyperparameters, spot simple trends, and see which methods combine well and which don't (the library is for fast deep learning training). This is a significant innovation in open-source documentation.
The main metrics would be runtime and some measure of performance (e.g. AUC at a few fractions of the dataset, or something similar).
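To make the idea concrete, here's a minimal sketch of what the underlying results table and a dashboard-style pivot could look like. The column names, method names, and numbers below are all placeholders I made up for illustration, not the actual Composer/Explorer schema:

```python
import pandas as pd

# Hypothetical experiment-results table: one row per (method, dataset fraction)
# run, with runtime and AUC as the two headline metrics.
results = pd.DataFrame([
    {"method": "method_a", "dataset_fraction": 0.25, "runtime_s": 1200, "auc": 0.71},
    {"method": "method_a", "dataset_fraction": 0.50, "runtime_s": 2400, "auc": 0.78},
    {"method": "baseline", "dataset_fraction": 0.25, "runtime_s": 1300, "auc": 0.68},
    {"method": "baseline", "dataset_fraction": 0.50, "runtime_s": 2600, "auc": 0.75},
])

# One view a dashboard would offer: AUC per method at each dataset fraction.
pivot = results.pivot(index="method", columns="dataset_fraction", values="auc")
print(pivot)
```

Superset/Preset can chart directly off a table shaped like `results`, so the dashboard layer wouldn't need much beyond getting the runs logged into a queryable store.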
I've been working with Superset a lot, and I'm pretty sure their SaaS offering (Preset) has a very generous free tier that would cover this use case. I'd be happy to set it up. Let me know your thoughts.