Problem
Right now, the AI researcher relies on overall step time and calculated MFU to guess whether an architectural change is faster. To make real progress optimizing PyTorch graphs or custom kernels, the agent needs visibility into per-kernel execution times.
Proposal
Add a --profile flag to train.py that uses torch.profiler. Instead of exporting the raw Chrome trace JSON (which is far too large for the LLM to read effectively), it should output a summarized Markdown table of the top 10 most expensive CUDA kernels (e.g., FlashAttention vs. MLP vs. AllReduce) along with their memory allocations. This will act as the LLM's "eyes" into actual hardware bottlenecks.
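A minimal sketch of what this could look like, assuming train.py already uses argparse and has a standard forward/backward step; profile_one_step, model, inputs, targets, and loss_fn are placeholder names for illustration, not existing code:

```python
import argparse

import torch
from torch.profiler import ProfilerActivity, profile

parser = argparse.ArgumentParser()
parser.add_argument("--profile", action="store_true",
                    help="Profile one training step and print a kernel summary")


def profile_one_step(model, inputs, targets, loss_fn, top_k=10):
    """Run a single forward/backward pass under torch.profiler and print
    a Markdown table of the top_k ops by self CUDA time."""
    with profile(
        activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
        profile_memory=True,  # also record per-op memory allocations
    ) as prof:
        loss = loss_fn(model(inputs), targets)  # placeholder training step
        loss.backward()

    # key_averages() aggregates events by op name; rank by GPU time.
    events = sorted(prof.key_averages(),
                    key=lambda e: e.self_cuda_time_total,
                    reverse=True)[:top_k]

    print("| Op / kernel | Calls | Self CUDA time (ms) | Self CUDA mem (MB) |")
    print("|---|---:|---:|---:|")
    for e in events:
        print(f"| {e.key} | {e.count} "
              f"| {e.self_cuda_time_total / 1e3:.3f} "      # reported in µs
              f"| {e.self_cuda_memory_usage / 2**20:.1f} |")
```

Note that prof.key_averages().table(sort_by="self_cuda_time_total", row_limit=10) already produces a similar plain-text summary out of the box; building the table manually as above just keeps the output valid Markdown so the LLM can parse it reliably.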