Non-record: 11L GEPA + 20k Steps + Pure Int6 + Legal TTT (val_bpb=1.0983): unlimited compute: 4×A100-40GB, ~2.8 hours#628
Conversation
- Non-record unlimited-compute submission: val_bpb=1.0983 (below 1.10)
- 20,000-step training (12,000 peak-LR + 8,000 warmdown) on 4×A100-40GB
- Pure int6 per-row quantization with 15-candidate GPTQ-lite + zstd-22
- Legal score-first TTT (SGD, 10 epochs, momentum 0.9): −0.044 BPB gain
- Float base 1.1153, artifact 14.29 MB (14,985,742 bytes)
- Finding 1: Warmdown is a first-class variable (6× the late-plateau rate)
- Finding 2: Better-trained models compress smaller
- Finding 3: SGD >> AdamW for legal TTT (2.4× gain, same base)
- Finding 4: Freezing early layers is active regularization
- Finding 5: After the right TTT family, invest in the base model
- What transfers to the record track
- Open frontiers
Community Review — Non-record: 11L GEPA + 20k Steps + Pure Int6 + Legal TTT (val_bpb=1.0983): unlimited compute: 4×A100-40GB, ~2.8 hours

BPB: 1.0983 | Compliance: LOOKS CLEAN — score-first-per-chunk TTT (legal #1416/#1423 pattern)

What I found in the code (head SHA …): the TTT path at line 399 implements the score-first-per-chunk pattern: each chunk is scored before the adapter updates on it. Per Issue #402 and Issue #677, TTT is legal when each token is scored before the adapter updates on it, and that's what the code does here.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.09s, dim=512, layers=11, vocab=1024, code=78281 B, SMOKE_TEST_PASS.

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending standard checks (3-seed validation, 16 MB artifact cap, 10-min wallclock on 8×H100 SXM). The compliance picture matches the legal reference frontier and no flags were raised by the classification pass.

Auto-classification caveat: this review was drafted by the AST-based classifier against a template derived from manually-reviewed cluster PRs (#1420, #1450, #1487, #1541, #1529, #1533, #1518). If I've misread a subtlety in your eval path — e.g., multi-epoch TTT that I mistook for single-pass, or a target-in-key lookup I missed in a helper function — please flag it and I'll re-run the audit manually.

Reviewed by @MatoTeziTanka — The Agora.
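The score-first-per-chunk rule the review checks for can be sketched as a small loop. This is an illustrative stand-in, not the PR's actual code: `score_fn` and `update_fn` are hypothetical placeholders for the real model scoring and adapter-update calls; only the ordering (score each chunk before updating on it) and the `epochs=10` setting come from the submission.

```python
def ttt_score_first(chunks, score_fn, update_fn, epochs=10):
    """Score-first-per-chunk TTT: each chunk is scored under the current
    adapter BEFORE the adapter updates on it, so no token's score ever
    depends on weights that have already trained on that token."""
    total_bits = 0.0
    for chunk in chunks:
        total_bits += score_fn(chunk)  # score first: this ordering is what makes it legal
        for _ in range(epochs):        # multi-epoch updates on an already-scored
            update_fn(chunk)           # chunk are still legal under #402/#677
    return total_bits
```

The key invariant is that `score_fn(chunk)` runs before any `update_fn(chunk)` call for the same chunk, even when updates repeat for multiple epochs.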
11L GEPA + 20k Steps + Pure Int6 + Legal TTT → 1.0983 BPB
Non-record unlimited-compute submission. Breaks 1.10 BPB with legal score-first TTT.
Key Numbers
Scaling Table
Research Contributions (5 transferable findings)
Warmdown is a first-class variable — The model plateaus at ~1.216 BPB during late peak-LR (steps 7k–12k, improving at ~2 mBPB/kstep), then warmdown delivers −0.101 BPB at 12.6 mBPB/kstep — roughly 6× the late-plateau rate. Warmdown isn't cleanup; it's where most of the remaining gain originates once the plateau sets in.
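The 12k/8k split above corresponds to a constant-then-linear-warmdown schedule. A minimal sketch, assuming a linear decay to zero (the peak LR value here is illustrative, not from the submission):

```python
def lr_schedule(step, peak_lr=0.01, peak_steps=12_000, warmdown_steps=8_000):
    """Constant peak LR for the first 12k steps, then a linear warmdown
    to zero over the final 8k steps (the split used in this run)."""
    if step < peak_steps:
        return peak_lr
    frac = (step - peak_steps) / warmdown_steps  # 0.0 -> 1.0 across warmdown
    return peak_lr * max(0.0, 1.0 - frac)
```

With this schedule, 40% of total steps (8k of 20k) are spent in warmdown, matching the "≥40% of total steps" transfer recommendation below.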
Better-trained models compress smaller — 20k-step model → 14.29 MB (smallest artifact), despite identical architecture and quantization. Optimization quality improves weight compressibility, not just float loss.
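The quantization pipeline's per-row int6 step with a small scale search can be sketched as follows. This is a simplified stand-in for the 15-candidate GPTQ-lite search, assuming candidates are scale multipliers around the max-abs scale and selection is by reconstruction error; the real pipeline then compresses the quantized rows with zstd-22.

```python
import numpy as np

def quantize_row_int6(row, n_candidates=15):
    """Per-row symmetric int6 quantization: try n_candidates scale
    multipliers and keep the one with the lowest squared
    reconstruction error. Signed int6 range is [-32, 31]."""
    base = float(np.abs(row).max()) / 31.0 or 1.0  # guard against all-zero rows
    best = None
    for m in np.linspace(0.8, 1.2, n_candidates):
        scale = base * m
        q = np.clip(np.round(row / scale), -32, 31)
        err = float(np.square(row - q * scale).sum())
        if best is None or err < best[0]:
            best = (err, q.astype(np.int8), scale)
    return best[1], best[2]  # quantized row and its scale
```

Finding 2 then says the *entropy* of these int6 rows, after zstd, is lower for the better-optimized 20k-step weights, which is why the artifact shrinks at identical architecture.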
SGD >> AdamW for legal TTT (controlled comparison) — On the same 5.2k-step, 24.6M-param base model: SGD+momentum delivers 2.4× the TTT gain of AdamW (−0.017 vs −0.007 float→final). Adam's moments can't converge in ~30 steps/chunk. Separately, the 20k GEPA model's −0.044 TTT gain is measured from a different baseline (quant→final) and different architecture, so should not be directly compared.
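The reason SGD+momentum wins in this regime is visible from its update rule: there are no second-moment statistics to warm up, so the very first steps already move at full effective rate. A minimal sketch (the learning rate is illustrative; momentum 0.9 matches the run):

```python
def sgd_momentum_step(w, grad, vel, lr=0.01, momentum=0.9):
    """One SGD+momentum update. Unlike AdamW, there is no
    second-moment estimate that needs tens of steps to converge,
    which matters when each chunk allows only ~30 update steps."""
    vel = momentum * vel + grad   # velocity accumulates the gradient
    return w - lr * vel, vel
```

By contrast, AdamW's per-parameter step size depends on a running second-moment average that is still badly estimated after ~30 steps, consistent with the 2.4× gap measured on the same base model.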
Freezing early layers is active regularization — Freezing 2 of 11 blocks (~18% of depth) during TTT isn't just a defense against catastrophic forgetting. Early layers hold generic features; later layers are the better adaptation surface. Even though freezing removes trainable parameters, the model adapts better.
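The freeze policy is simple to express as a per-block trainability mask. A framework-agnostic sketch (in PyTorch one would instead set `p.requires_grad = False` on the frozen blocks' parameters):

```python
def freeze_early_blocks(n_blocks=11, n_freeze=2):
    """Trainability mask for TTT: the first n_freeze blocks are frozen
    (2 of 11 in this run), later blocks are free to adapt."""
    return [i >= n_freeze for i in range(n_blocks)]
```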
After the right TTT family, invest in the base model — TTT's share of total gain over naive baseline shrinks from 22% (5.2k-step base) to 13% (20k-step base). The big jump came from choosing the right TTT regime (SGD + freeze + multi-epoch). After that, base model quality delivers more BPB per unit of effort than TTT micro-tuning.
What Transfers to Record Track
✅ Warmdown emphasis (≥40% of total steps)
✅ GPTQ-lite / pure int6
✅ SGD-based legal TTT (2.4× gain over AdamW, validated on same base)
✅ Freeze-early-blocks as TTT regularization
Open Frontiers
The local TTT recipe appears mostly saturated. Next questions are structural: stream vs. document-based adaptation, self-distillation at test time, quantization-aware TTT, and base-training scaling laws under fixed 16 MB budget.
Full analysis with all tables and derivations in README.md.
Prior Non-Record Submissions
Acknowledgments
Builds on techniques from: @signalrush (PR #414, GPTQ-lite/EMA), @jfprincz (PRs #287/#315, XSA/Partial RoPE/LN Scale), @unnir (PR #265, Efficient XSA), @raahilshah (PR #162, SmearGate/BigramHash), @aruniyer (PR #86, Int6 QAT), @samacqua (LoRA TTT), @abaybektursun (PR #549, LeakyReLU²), and the OpenAI baseline.