Interesting project. I've been measuring how context quality affects Claude Code output and noticed something relevant to reflection workflows.
When the agent has 180K tokens of context (most of it irrelevant file reads), both the initial output and the reflection pass are noisier. When context is pre-filtered to ~50K relevant tokens, the initial output improves enough that reflection catches more subtle issues instead of correcting noise-induced errors.
In other words: cleaner input context makes reflection more productive, not just cheaper.
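For anyone who wants to try this, here's a minimal sketch of the pre-filtering idea: rank candidate file reads by naive keyword overlap with the task, then keep the top-scoring ones until a token budget is hit. Everything here (function names, the 0.75 words-per-token heuristic, the scoring) is illustrative, not from any real tool or from the benchmark linked below.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~0.75 words per token is typical for English text;
    # whitespace word count is close enough for a budget check.
    return int(len(text.split()) / 0.75)

def prefilter_context(task: str, files: dict[str, str], budget: int = 50_000) -> dict[str, str]:
    # Score each file by keyword overlap with the task description,
    # highest overlap first.
    task_words = set(task.lower().split())
    scored = sorted(
        files.items(),
        key=lambda kv: -len(task_words & set(kv[1].lower().split())),
    )
    # Greedily keep files until the token budget is exhausted.
    kept, used = {}, 0
    for path, text in scored:
        cost = estimate_tokens(text)
        if used + cost > budget:
            continue
        kept[path] = text
        used += cost
    return kept
```

A real implementation would use embedding similarity instead of keyword overlap, but even this crude version captures the point: spend the budget on relevant files rather than everything the agent happened to read.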
Data: vexp.dev/benchmark