Problem
The AI researcher currently tries to optimize hyperparameters (MATRIX_LR, EMBEDDING_LR, WEIGHT_DECAY) by manually editing train.py and observing the outcome of each run. This wastes the agent's context window and token-generation budget.
Proposal
Provide a generic hyperparameter-sweep harness (e.g., Optuna, or random/grid search) inside the repo. The LLM can then simply write a search-space JSON or execute a one-liner to spawn, say, 20 concurrent or sequential short runs, and get back the best hyperparameters found without having to micro-manage the search itself.
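A minimal sketch of what such a harness could look like, using stdlib-only random search (an Optuna-backed variant would be structured similarly). The hyperparameter names come from train.py; the ranges, the `run_short_trial` stub, and the `random_search` helper are illustrative assumptions, not existing repo code:

```python
import math
import random

# Hypothetical search space, mirroring the JSON file the agent would write.
# The loguniform ranges below are illustrative assumptions.
SEARCH_SPACE = {
    "MATRIX_LR": {"low": 1e-4, "high": 1e-1, "log": True},
    "EMBEDDING_LR": {"low": 1e-4, "high": 1e-1, "log": True},
    "WEIGHT_DECAY": {"low": 1e-6, "high": 1e-2, "log": True},
}

def sample(space, rng):
    """Draw one hyperparameter configuration from the search space."""
    cfg = {}
    for name, spec in space.items():
        if spec.get("log"):
            # Sample uniformly in log space for scale-type parameters.
            cfg[name] = math.exp(rng.uniform(math.log(spec["low"]),
                                             math.log(spec["high"])))
        else:
            cfg[name] = rng.uniform(spec["low"], spec["high"])
    return cfg

def run_short_trial(cfg):
    """Stand-in for a short training run; returns a validation loss.

    The real harness would invoke train.py with cfg (e.g. via subprocess
    or environment variables) and parse the logged loss. Here a dummy
    smooth objective keeps the sketch runnable on its own.
    """
    return sum((math.log10(v) + 3.0) ** 2 for v in cfg.values())

def random_search(space, n_trials=20, seed=0):
    """Run n_trials short runs and return the best config found."""
    rng = random.Random(seed)
    best_cfg, best_loss = None, float("inf")
    for _ in range(n_trials):
        cfg = sample(space, rng)
        loss = run_short_trial(cfg)
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss

if __name__ == "__main__":
    best, loss = random_search(SEARCH_SPACE, n_trials=20)
    print(best, loss)
```

The one-liner the agent runs would just point this at the search-space JSON; "best" here means the best configuration the budgeted search found, not a global optimum.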