Full Documentation | Changelog
Malet (short for Machine Learning Experiment Tool) is a tool for hyperparameter grid searches, metric logging, advanced analyses and visualizations.
This is a pet project I'm developing for personal research, and it is still unstable. Kinda having fun with its architecture, cosmetics, and documentation. Don't really know where this is going :)
- Easy & powerful hyperparameter grid search syntax
- Experiment metric logging and resuming system
- Flexible data processing and visualization tools
- Search parallelization for multiple GPUs
```shell
pip install malet
```

From source:

```shell
pip install git+https://github.com/dongyeoplee2/Malet.git
```

For development (uses uv):

```shell
uv sync
```

Using Malet starts with making a folder with a single yaml config file.
The various files resulting from an experiment are saved in this single folder.
We advise creating a folder for each experiment under an `experiments` folder.
```
experiments/
└── {experiment folder}/
    ├── exp_config.yaml : experiment config yaml file (created by the user)
    ├── log.tsv         : log file for saving experiment results (generated by malet.experiment)
    ├── (log_splits)    : folder for split logs (generated by malet.experiment)
    └── figure          : folder for figures (generated by malet.plot)
```
Say you have a training pipeline that takes in a configuration (any object with a dictionary-like interface). We require you to return the results of the training so they get logged.
```python
def train(config, ...):
    ...
    # training happens here
    ...
    metric_dict = {
        'train_accuracies': train_accuracies,
        'val_accuracies': val_accuracies,
        'train_losses': train_losses,
        'val_losses': val_losses,
    }
    return metric_dict
```

You can write configurations in the yaml file as you normally would.
We also provide the useful special keyword `grid`, used as follows:
```yaml
# static configs
model: LeNet5
dataset: mnist
num_epochs: 100
batch_size: 128
optimizer: adam

# gridded fields
grid:
  seed: [1, 2, 3]
  lr: [0.0001, 0.001, 0.01, 0.1]
  weight_decay: [0.0, 0.00005, 0.0001]
```

Specifying lists of config values under `grid` runs all possible combinations (i.e. the grid) of your configurations, with fields declared earlier in `grid` changing least frequently.
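The expansion order described above can be pictured with a small Python sketch. This only illustrates the behavior and is not Malet's actual implementation; the `grid` dict below mirrors the yaml above:

```python
from itertools import product

# Hypothetical dict mirroring the `grid` section of the yaml above.
grid = {
    'seed': [1, 2, 3],
    'lr': [0.0001, 0.001, 0.01, 0.1],
    'weight_decay': [0.0, 0.00005, 0.0001],
}

# itertools.product varies the last iterable fastest, so fields
# declared earlier change least frequently, matching the text above.
configs = [dict(zip(grid, values)) for values in product(*grid.values())]

print(len(configs))  # 3 * 4 * 3 = 36 combinations
print(configs[0])    # {'seed': 1, 'lr': 0.0001, 'weight_decay': 0.0}
```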
The following will run `train_fn` on the grid of configs specified by the yaml file in `{exp_folder_path}`:
```python
from functools import partial
from malet.experiment import Experiment

train_fn = partial(train, ...{other arguments besides config}..)
metric_fields = ['train_accuracies', 'val_accuracies', 'train_losses', 'val_losses']
experiment = Experiment({exp_folder_path}, train_fn, metric_fields)
experiment.run()
```

Note that you need to partially apply your original function so that you pass in a function with only `config` as its argument.
The experiment log will be automatically saved in `{exp_folder_path}` as `log.tsv`, where the static configs and the experiment log are saved in yaml-like and tsv-like structures, respectively.
You can retrieve this data in python using `ExperimentLog` in `malet.experiment` as follows:

```python
from malet.experiment import ExperimentLog

log = ExperimentLog.from_tsv({tsv_file})
static_configs = log.static_configs
df = log.df
```

Experiment logs also enable resuming from the most recently run config when a job is suddenly killed.
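Since the log exposes a dataframe, standard pandas operations can be used to slice it. A minimal sketch, assuming `log.df` behaves like a `pandas.DataFrame` with grid fields and metrics as columns (the toy data below is made up for illustration):

```python
import pandas as pd

# Toy stand-in for log.df: one row per config, metric as a column.
df = pd.DataFrame({
    'lr':           [0.001, 0.001, 0.01, 0.01],
    'weight_decay': [0.0, 0.0001, 0.0, 0.0001],
    'val_accuracy': [0.91, 0.93, 0.89, 0.90],
})

# Best validation accuracy achieved for each learning rate.
best = df.groupby('lr')['val_accuracy'].max()
print(best.loc[0.001])  # 0.93
```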
Running `malet.plot` lets you make plots based on `log.tsv` in the experiment folder.

```shell
malet-plot \
-exp_folder ../experiments/{exp_folder} \
-mode curve-epoch-train_accuracy
```

The key intuition for using this is to leave only two fields in the dataframe, one for the x-axis and one for the y-axis, by
- specifying a specific value (_e.g._, other hyperparameters), which will leave only one value for each field.
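That reduction can be pictured on a toy dataframe: after fixing every other field to a single value, only an x-field and a y-field remain. A pandas sketch for illustration only (not what `malet.plot` does internally):

```python
import pandas as pd

# Toy log with three grid/index fields plus a metric.
df = pd.DataFrame({
    'lr':           [0.001, 0.001, 0.001, 0.01, 0.01, 0.01],
    'seed':         [1, 1, 1, 1, 1, 1],
    'epoch':        [1, 2, 3, 1, 2, 3],
    'val_accuracy': [0.80, 0.88, 0.91, 0.75, 0.82, 0.85],
})

# Fix lr and seed to single values; only (epoch, val_accuracy)
# remain, ready to plot as a curve.
curve = df[(df['lr'] == 0.001) & (df['seed'] == 1)][['epoch', 'val_accuracy']]
print(list(curve['epoch']))  # [1, 2, 3]
```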
Available plot modes: `curve`, `curve_best`, `bar`, `heatmap`, `scatter`, `scatter_heat`.
For the full list of CLI arguments, plot configuration options, advanced gridding, parallel GPU training, checkpointing, and more, see the full documentation.
If you find Malet useful, please cite it as:
```bibtex
@software{lee2024malet,
  author = {Dongyeop Lee},
  title = {Malet: Machine Learning Experiment Tool},
  year = {2024},
  url = {https://github.com/dongyeoplee2/Malet},
}
```

