Record: 11L Muon Legal TTT + Entropy-Adaptive Epochs (8×H100) — val_bpb 1.1179 (3-seed mean) #1148
Two novel TTT innovations: (1) Muon-style Newton-Schulz orthogonalized updates replace SGD in the TTT loop; (2) entropy-adaptive 2/3/4 epochs per chunk based on globally-synced chunk NLL. 3-seed mean val_bpb 1.1179, std 0.0002. All seeds under the 16MB artifact and 600s wallclock caps.
- Experiment log: complete Round 4 int5 sweep results (11 experiments). Key finding: int5 QAT is a dead end; all top submissions use int6.
- Updated leaderboard state: PR #1019 merged (1.1147), open PRs up to openai#1148.
- Updated hypotheses: H-C01 rejected, new SOTA target ≤1.1097.
- Merged origin/main (PR #1019 + new records).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Community Review — BPB: 1.1179 | Compliance: LOOKS CLEAN — score-first-per-chunk TTT (legal #1416/#1423 pattern)

What I found in the code: the TTT path at line 1079 implements the score-first-per-chunk pattern — each chunk is scored before the adapter updates on it. Per Issue #402 and Issue #677, TTT is legal when each token is scored before the adapter updates on it, and that's what the code does here, chunk by chunk.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.05s, dim=512, layers=11, vocab=1024, code=93038 B, SMOKE_TEST_PASS.

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending standard checks (3-seed validation, 16MB artifact cap, 10-min wallclock on 8×H100 SXM). The compliance picture matches the legal reference frontier and no flags were raised by the classification pass.

Auto-classification caveat: this review was drafted by a deterministic AST-based classifier against a template derived from manually-reviewed cluster PRs (#1420, #1450, #1487, #1541, #1529, #1533, #1518). If I've misread a subtlety in your eval path — e.g., multi-epoch TTT that I mistook for single-pass, or a target-in-key lookup I missed in a helper function — please flag it and I'll re-run the audit manually.

Reviewed by @MatoTeziTanka — The Agora.
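The score-first-per-chunk contract the review checks for — every token is scored before the adapter ever updates on it — can be sketched as below. This is a minimal illustration with a hypothetical `model.score`/`model.adapt` interface, not the submission's actual eval code:

```python
def legal_ttt_eval(model, chunks, epochs=3):
    """Legal TTT evaluation loop: score each chunk under the current adapter
    state FIRST, and only then run the adaptation epochs on that chunk.

    Because scoring always precedes the update for a given chunk, no token's
    loss ever benefits from gradient steps taken on that same token.
    """
    total_nll = 0.0
    total_tokens = 0
    for chunk in chunks:
        total_nll += model.score(chunk)   # score under pre-update weights
        total_tokens += len(chunk)
        for _ in range(epochs):           # then adapt on the scored chunk
            model.adapt(chunk)
    return total_nll / total_tokens       # mean NLL per token
```

The key invariant is simply the ordering inside the loop: swap `score` and `adapt` and the same code becomes an illegal (train-on-eval) variant.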
Summary
Two novel TTT innovations on the SOTA base stack (PR #399 + PR #414 + PR #461): Muon-style Newton-Schulz orthogonalized updates replace SGD in the TTT loop, and entropy-adaptive epoch selection concentrates adaptation budget on harder content. Beats current SOTA (1.1194) with a 3-seed mean of 1.1179.
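For readers unfamiliar with the Muon-style update: instead of stepping along the raw gradient, the gradient matrix is first approximately orthogonalized with a Newton-Schulz iteration, equalizing its singular values. A minimal numpy sketch follows — it uses the standard quintic coefficients from the public Muon optimizer; whether this submission uses identical coefficients is an assumption (only NS steps=3 is stated in the Method Notes):

```python
import numpy as np

def newton_schulz_orthogonalize(grad, steps=3):
    """Approximately orthogonalize a gradient matrix via the quintic
    Newton-Schulz iteration used by Muon, pushing all singular values
    toward 1 while preserving the singular vectors."""
    a, b, c = 3.4445, -4.7750, 2.0315          # standard Muon coefficients
    X = grad / (np.linalg.norm(grad) + 1e-7)   # spectral norm <= 1 after
                                               # Frobenius normalization
    transposed = X.shape[0] > X.shape[1]
    if transposed:                             # iterate on the smaller Gram
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X    # quintic polynomial in X
    return X.T if transposed else X

# Replacing SGD in a TTT inner loop would then look like (sketch):
#   W -= TTT_LR * newton_schulz_orthogonalize(dW, steps=3)
```

With 3 steps the singular values are only roughly equalized, which is apparently sufficient here while keeping the per-chunk update cheap.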
Run Results (3 seeds)
| legal_ttt_exact val_bpb | legal_ttt_exact val_loss | val_bpb | wallclock | train time | artifact size |
|---|---|---|---|---|---|
| 1.11765030 | 1.88710072 | 1.1366 | 599.1s | 477.9s | 15,944,410 bytes |
| 1.11812929 | 1.88790947 | 1.1371 | 599.1s | 485.3s | 15,873,826 bytes |
| 1.11789934 | 1.88752121 | 1.1367 | 599.1s | 479.2s | 15,879,042 bytes |

Mean legal_ttt_exact val_bpb: 1.11789

Method Notes
- `NUM_LAYERS=11`, `BIGRAM_VOCAB_SIZE=1536`, `XSA_LAST_N=4`
- `TTT_ENABLED=1`, score-first path
- `TTT_MUON=1` — Newton-Schulz orthogonalized updates in TTT loop (NS steps=3)
- `TTT_ENTROPY_ADAPT=1` — entropy-adaptive 2/3/4 epochs per chunk (H_HIGH=2.1, H_LOW=1.75)
- `TTT_LR=0.002`, `TTT_EPOCHS=3`, `TTT_CHUNK_TOKENS=32768`
- `NGRAM_EVAL_ENABLED=0`, `NGRAM_TWO_PASS_ENABLED=0`, `NGRAM_FULL_RESCORE=0`
- `EMA_ENABLED=1`, `SWA_ENABLED=1`, `LATE_QAT=1`, `VE_ENABLED=1`
- `WARMDOWN_ITERS=3500`, `MAX_WALLCLOCK_SECONDS=599`

Submission Checklist
- `records/track_10min_16mb/README.md`
- `submission.json`
- `train_gpt.py`
- train logs (3 seeds)
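The entropy-adaptive schedule from the Method Notes maps each chunk's globally-synced NLL to a TTT epoch count, concentrating the adaptation budget on harder content. A minimal sketch using the stated thresholds (function name and boundary inclusivity are illustrative assumptions):

```python
def epochs_for_chunk(chunk_nll, h_low=1.75, h_high=2.1):
    """Pick the TTT epoch count for a chunk from its mean NLL.

    Easy chunks (NLL <= H_LOW) get 2 epochs, hard chunks (NLL >= H_HIGH)
    get 4, and everything in between gets the default TTT_EPOCHS=3.
    """
    if chunk_nll >= h_high:
        return 4
    if chunk_nll <= h_low:
        return 2
    return 3
```

Since the chunk NLL is synced across all 8 ranks before the decision, every rank runs the same number of epochs and the collective stays in lockstep.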