RFC: How to Clean Up All the Parameter Golf Submissions #886
abaybektursun wants to merge 11 commits into openai:main
Conversation
Study of eval-time n-gram caching — a technique that reduces BPB from 1.11 to 0.38 while preserving strict causality, costing zero artifact bytes, but growing the effective model to 17x the artifact limit. Includes single-GPU ablations, 8-GPU all-reduce results (0.49 BPB in 401s, under 600s budget), alpha sweep, and a comparison of competition eval setup vs real-world inference constraints. Proposes four rule clarifications to align the competition with deployment realities. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
This puts a clear name on something I've been navigating by feel: the line between approved eval-time learning and unbounded model growth is quantitative, not qualitative, and right now nobody knows where it is. |
- Base model is ValCalib GPTQ (1.1142 BPB), not PR openai#549 (1.1194)
- Remove stale "not yet deployed" / "we estimate" for EXP-11
- Note α=0.80 (939s) exceeds 600s budget
- Fix PR openai#727 score to 0.9674, PR openai#788 to 0.9059
- Fix PR openai#596 BPB to 0.6430
- "Approved" → "Technique deemed legal" for closed PRs
- Add bucket sweep and per-token overhead proposal
- Replace "neural" with "base LM" throughout

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Decompressed model weights alone exceed any naive GPU memory cap. The right constraint is auxiliary state: tensors that accumulate during eval and are not derivable from the artifact (hash tables, TTT deltas). Not model weights, KV cache, or activations. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Strong support for this RFC. The fact that a frequency table with zero training beat every trained model proves this thing is measuring dataset memorization, not language-modeling quality. We've been pushing neural improvements (GPTQ, QAT, even novel architectures), and it's demoralizing to see lookup tables dominate. |
Correction: our original explanation of why hash collisions help was wrong. Credit to @Eppie (#677 comment) for identifying the probability validity issue, and to Mirco on Discord for the `P(cache_bin)` formulation. Our bucket sweep data is correct, but the mechanism is different from what we originally described: the hash ratio is not a conditional probability. PR body and README updated to reflect this. |
Credit to @Eppie and Mirco (Discord) for the correct formulation. The hash ratio is not a conditional probability — it approaches 1.0 as collision-aggregated counts fill both tables proportionally. The BPB improvement is a measurement artifact from point-evaluating an invalid distribution, not from useful statistical estimation. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Fix 1: verify sum(probs) ≈ 1.0 at every scored position
Fix 2: cap auxiliary eval-time state ≤ 32 MB
Fix 3: cap per-token overhead ≤ 1.5× base model

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…tion Causality is assumed but not enforced by the eval harness. Two-pass rescoring violates it. Should be explicit. Bucket sweep moved from experimental details to the main argument since it proves the BPB scores are inflated. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ce it Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add causality and distribution validity to real-world comparison. Explain how unbounded eval-time state can be exploited even with valid distributions and causality (self-distillation, ensembling, neural cache). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
README now points to blog + PR instead of maintaining a third copy. submission.json: fix base_model_pr 549→728, update name and blurb. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
📄 Full article with charts: abay.tech/posts/eval-time-model-growth
The distribution doesn't sum to 1
N-gram caching recently pushed reported scores below 0.5 BPB. We dug into the numbers and found a cool catch: the probability distribution sums to ~410, not 1.
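The check itself is cheap. A minimal sketch with NumPy (the function name, the 1024-token vocab, and the constant cache term are illustrative, not the eval harness's actual code):

```python
import numpy as np

def is_valid_distribution(probs, tol=1e-3):
    """True iff a per-position probability vector sums to ~1."""
    return abs(float(np.sum(probs)) - 1.0) < tol

rng = np.random.default_rng(0)
logits = rng.standard_normal(1024)
p_model = np.exp(logits) / np.exp(logits).sum()  # softmax: a valid distribution

# Blend in a cache term that is ~1.0 for EVERY token, as described below:
alpha = 0.8
p_blend = (1 - alpha) * p_model + alpha * 0.99

print(is_valid_distribution(p_model))   # True
print(is_valid_distribution(p_blend))   # False: sums to ~811, not 1
```

The blended vector only looks reasonable if you evaluate it at a single index; summing it exposes the problem in one line.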
The blend
`(1-α) * p_model + α * P(cache_bin)` is only computed for the correct token. The other 1,023 tokens are never checked. If they were, the distribution would sum to ~410, not 1.0.

Why: the cache stores two hash tables per n-gram order: one counts how often each context appears, one counts how often each (context, token) pair appears. Their ratio — `full_table[hash(ctx, tok)] / ctx_table[hash(ctx)]` — is meant to approximate `P(tok | ctx)`. But because context-only and context+token hash to independent bucket indices, the ratio doesn't track token frequency. With 1M buckets and 62M tokens, each bucket averages ~62 entries. The ratio of two similarly-populated buckets approaches 1.0 for ALL tokens. This is `P(cache_bin)`, not `P(tok | ctx)`.

The 1-bucket proof: `P(cache_bin) = T/T = 1.0` for every lookup. With `α = 1`, BPB = 0. Perfect score — which tells us the metric isn't measuring what we think.

For any token the model predicts better than uniform (p > 1/K), renormalization strictly decreases its probability. The n-gram contribution doesn't just wash out — it actively hurts.
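The mechanism is easy to reproduce in miniature. A self-contained sketch (synthetic uniform token stream and a toy bucket count, not the submission's actual tables):

```python
import random
from collections import Counter

random.seed(0)
V, N_BUCKETS = 1024, 1000  # toy vocab; deliberately heavily-collided tables
stream = [random.randrange(V) for _ in range(500_000)]

ctx_table, full_table = Counter(), Counter()
for i in range(3, len(stream)):
    ctx = tuple(stream[i - 3:i])
    ctx_table[hash(ctx) % N_BUCKETS] += 1                # context-only counts
    full_table[hash((ctx, stream[i])) % N_BUCKETS] += 1  # (context, token) counts

# Probe one context: the supposed P(tok | ctx) ratio for every candidate token.
ctx = tuple(stream[:3])
ratios = [full_table[hash((ctx, t)) % N_BUCKETS] / ctx_table[hash(ctx) % N_BUCKETS]
          for t in range(V)]
print(min(ratios), max(ratios), sum(ratios))
# every ratio hovers near 1.0, and the "distribution" sums to ~V, not 1
```

Both tables receive the same total number of increments, so any two buckets hold similar counts and their ratio is near 1.0 regardless of which token you query — exactly the `P(cache_bin)` behavior described above.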
Credit to @Eppie (#677 comment) for identifying the probability validity issue, and to Mirco on Discord for the `P(cache_bin)` formulation.

Bucket sweep (empirical confirmation)
All configs use backoff 2-7 with entropy-adaptive α. 256M buckets (near collision-free) scores 1.1123, near the float baseline (1.1109). The "improvement" tracks collision density, not prediction quality.
The n-gram-only configuration — hash tables with no base LM — reports 1.0615, below the base LM baseline at 1.1109. A frequency table with no learned parameters appears to outcompress a trained language model. This is only possible because the number being reported is not measuring compression.
Two-pass rescoring compounds the problem
PRs #846, #853, #868, #870, #881, #888 use two passes: pass 1 scores tokens and builds the cache, pass 2 rescores ALL tokens using the complete cache. This violates causality and compounds the distribution issue.
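The two-pass effect is easy to demonstrate with a toy cache. A minimal sketch (add-one-smoothed unigram counts standing in for the n-gram tables; all numbers illustrative):

```python
import math
from collections import Counter

def avg_bits(tokens, vocab, two_pass=False):
    """Score a token stream with a count-based cache, add-one smoothed.
    Causal: counts are updated only AFTER each token is scored.
    Two-pass: every token is scored against the final, complete counts."""
    counts = Counter(tokens) if two_pass else Counter()
    total = len(tokens) if two_pass else 0
    bits = 0.0
    for tok in tokens:
        p = (counts[tok] + 1) / (total + vocab)
        bits += -math.log2(p)
        if not two_pass:
            counts[tok] += 1
            total += 1
    return bits / len(tokens)

tokens = [0, 1, 0, 1] * 64  # 256 tokens, trivially repetitive
causal = avg_bits(tokens, vocab=16)
rescored = avg_bits(tokens, vocab=16, two_pass=True)
print(causal, rescored)  # two-pass reports fewer bits: the cache saw the future
```

The two-pass score is strictly better because every position, including the earliest ones, is scored against statistics gathered from tokens that come after it — which is precisely what a causal compressor is not allowed to do.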
A separate question: what should the competition measure?
The distribution issue above is a measurement bug — it applies regardless of what anyone thinks the competition should optimize for. What follows is a design conversation. Reasonable people can disagree.
Even with valid distributions and preserved causality, the model can grow unboundedly at eval time. Someone could train a second, larger model via self-distillation, ensemble 8 copies via divergent TTT, or store 63 GB of hidden states as a neural cache. All valid. All causal. All far beyond 16 MB. The competition gives each evaluation 8×H100s and a 600s budget for a 16 MB model. In deployment, inference is constrained by hardware cost, latency, and concurrent users. Whether the competition should reflect those realities is an interesting design choice.
Proposed fixes
@0hq @cocohearts @valerio-oai
1. Verify the distribution sums to 1
fixing the measurement
One `torch.sum` per position. 1–2 seconds for 62M tokens. Catches every invalid distribution. Passes everything valid. Not n-gram specific.
2. Make causality an explicit rule
aligning with reality
The FAQ says you can only train on tokens "you've already evaluated your model on." Two-pass rescoring violates this. Making it a stated rule would clarify the intent.
3. Cap auxiliary eval-time state
aligning with reality
Constrain auxiliary state: tensors that accumulate during eval and are not derivable from the artifact alone. Not model weights, not KV cache, not activations. A cap of ≤ 32 MB preserves everything currently approved (TTT LoRA at ~2 MB).
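One concrete way a harness could enforce this, as a sketch (the class and tensor names are hypothetical): register each eval-time allocation that isn't reconstructible from the artifact and sum its bytes.

```python
import numpy as np

AUX_CAP_BYTES = 32 * 2**20  # proposed 32 MB cap

class AuxStateTracker:
    """Registry for eval-time state not derivable from the artifact alone.
    Model weights, KV cache, and activations are deliberately NOT registered."""
    def __init__(self):
        self.tensors = {}

    def register(self, name, arr):
        self.tensors[name] = arr

    def total_bytes(self):
        return sum(a.nbytes for a in self.tensors.values())

    def within_cap(self):
        return self.total_bytes() <= AUX_CAP_BYTES

tracker = AuxStateTracker()
tracker.register("ttt_lora_delta", np.zeros((512, 2048), np.float16))  # ~2 MB
print(tracker.within_cap())  # True: TTT LoRA survives the cap

tracker.register("ngram_tables", np.zeros(20_000_000, np.uint32))      # ~80 MB
print(tracker.within_cap())  # False: hash tables at this scale do not
```

The point of the registry split is that a 2 MB TTT delta passes while multi-hundred-MB hash tables fail, without touching weights or KV cache at all.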
4. Cap per-token overhead
aligning with reality
Eval-time techniques must not increase per-token latency by more than 50% over the base model forward pass. Base LM on 8×H100 takes 110s. A 1.5× cap means 165s max. The n-gram cache takes 401s (3.6×).
What do you think?
Experimental appendix
All BPB numbers below are from an invalid distribution. They measure how much `P(cache_bin)` inflates the correct token's probability, not compression quality.

Single GPU (stride=64, FineWeb val, 62M tokens):
8×H100 with all-reduce sync (first three under 600s budget):
Alpha sweep — higher α = more weight on inflated ratio = lower reported BPB. Tracks α, not prediction quality.
Order scaling — saturates around order 9–12. Each additional order changes which hash ratio is used for each token.
Stride decomposition — the artifact magnitude (~0.62 BPB) is independent of sliding window stride.
Base model: PR #728. Reproduction scripts: `experiments/eval_time_mixing/scripts/`.

Test plan
🤖 Generated with Claude Code