From 00bf623d28b38d130b5e2c86441a21a1dde2a256 Mon Sep 17 00:00:00 2001
From: Sunny
Date: Tue, 10 Mar 2026 13:00:55 +0000
Subject: [PATCH] docs: fix grammar in README (a three files -> three files)

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 6f211947..2bc30516 100644
--- a/README.md
+++ b/README.md
@@ -8,7 +8,7 @@ The idea: give an AI agent a small but real LLM training setup and let it experi
 
 ## How it works
 
-The repo is deliberately kept small and only really has a three files that matter:
+The repo is deliberately kept small and only really has three files that matter:
 
 - **`prepare.py`** — fixed constants, one-time data prep (downloads training data, trains a BPE tokenizer), and runtime utilities (dataloader, evaluation). Not modified.
 - **`train.py`** — the single file the agent edits. Contains the full GPT model, optimizer (Muon + AdamW), and training loop. Everything is fair game: architecture, hyperparameters, optimizer, batch size, etc. **This file is edited and iterated on by the agent**.