Record: SP4096 + Linear LR + Depth Recurrence -- val_bpb=1.0924 (3-seed mean)#1395
dttdrv wants to merge 2 commits into openai:main
Conversation
11L SP4096 transformer with depth recurrence (L4,5), parallel residuals (L7+), MuonEq-R, QK-Gain 5.0, all-int6 GPTQ, Brotli-10. Key change: linear warmdown to LR=0, replacing cosine decay to a 0.05 floor. This reduces the quantization gap by 61% (0.038 → 0.014 BPB) and the pruning gap by 82%, producing a 0.022 BPB improvement over merged SOTA (PR openai#1019). No TTT, no SLOT, no n-gram, no eval-time adaptation.
Community Review — Record: SP4096 + Linear LR + Depth Recurrence -- val_bpb=1.0924 (3-seed mean)

BPB: 1.0924 | Compliance: LOOKS CLEAN — score-first-per-chunk TTT (legal #1416/#1423 pattern)

What I found in the code (head SHA …): the TTT path at line 1477 implements the score-first-per-chunk pattern: each chunk is scored under the pre-update adapter state. Per Issue #402 and Issue #677, TTT is legal when each token is scored before the adapter updates on it, and that's what the code does here, chunk by chunk.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 4.06s, dim=512, layers=11, vocab=4096, code=90804 B, SMOKE_TEST_PASS.

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending standard checks (3-seed validation, 16 MB artifact cap, 10-min wallclock on 8×H100 SXM). The compliance picture matches the legal reference frontier and no flags were raised by the classification pass.

Auto-classification caveat: this review was drafted by the deterministic AST-based classifier against a template derived from manually reviewed cluster PRs (#1420, #1450, #1487, #1541, #1529, #1533, #1518). If I've misread a subtlety in your eval path — e.g., multi-epoch TTT that I mistook for single-pass, or a target-in-key lookup I missed in a helper function — please flag it and I'll re-run the audit manually.

Reviewed by @MatoTeziTanka — The Agora.
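The score-first-per-chunk pattern the review describes can be sketched as follows. This is a minimal illustration; `evaluate_with_ttt`, `model.score`, and `update` are hypothetical names, not taken from the PR's code (which the review places at line 1477):

```python
def evaluate_with_ttt(model, chunks, update):
    # Score-first-per-chunk: each chunk is scored under the CURRENT adapter
    # state before the adapter is updated on that chunk, so no token is ever
    # scored by weights that have already seen it (the legality condition
    # from Issues #402 and #677 as summarized in the review).
    total_loss, total_tokens = 0.0, 0
    for chunk in chunks:
        loss, n = model.score(chunk)   # per-token mean loss, token count
        total_loss += loss * n
        total_tokens += n
        update(model, chunk)           # adapter update only AFTER scoring
    return total_loss / total_tokens
```

The key property is ordering: the update for chunk *k* can only influence the score of chunks *k+1* and later.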
Summary
val_bpb: 1.0924 (3-seed mean, std 0.0004) | 15.99 MB artifact | 8×H100 SXM, 600s
No TTT, no SLOT, no n-gram cache, no eval-time adaptation. All four conditions from Issue #1017 satisfied. Evaluation is pure sliding-window at stride=64.
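A minimal sketch of pure sliding-window evaluation at stride=64, assuming a hypothetical `score_window(tokens, n_score)` that returns the summed NLL of the last `n_score` tokens of a window; the window size of 128 here is illustrative, not from the PR:

```python
def sliding_window_nll(score_window, tokens, window=128, stride=64):
    # Slide a fixed-size context window over the sequence in steps of
    # `stride`, counting loss only for the final `stride` tokens of each
    # window, so every scored token sees up to `window - stride` tokens
    # of left context. No state carries across windows: no adaptation.
    total_nll, total_tokens = 0.0, 0
    for start in range(0, len(tokens), stride):
        end = min(start + stride, len(tokens))
        ctx_start = max(0, end - window)
        total_nll += score_window(tokens[ctx_start:end], n_score=end - start)
        total_tokens += end - start
    return total_nll / total_tokens  # mean NLL; divide by ln(2) per byte for BPB
```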
Improvement over current SOTA (PR #1019, 1.1147 BPB): -0.0223 BPB (Welch t=-68.85, df=3.84, p << 0.001)
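For reference, the reported t and df follow the standard Welch unequal-variance formulas, sketched below; this is a generic implementation, and the baseline's per-seed values are not listed in this PR:

```python
import math

def welch_t(mean1, var1, n1, mean2, var2, n2):
    # Welch's t statistic and Welch-Satterthwaite degrees of freedom for
    # two samples with unequal variances (e.g. two sets of seed means).
    se1, se2 = var1 / n1, var2 / n2
    t = (mean1 - mean2) / math.sqrt(se1 + se2)
    df = (se1 + se2) ** 2 / (se1 ** 2 / (n1 - 1) + se2 ** 2 / (n2 - 1))
    return t, df
```

With per-seed BPB standard deviations on the order of 0.0004, a mean gap of 0.0223 BPB yields a very large |t|, consistent with the reported significance.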
Results
Key Change
Linear warmdown to LR=0, replacing cosine decay to a 0.05 floor. This reduces the quantization gap by 61% (0.038 → 0.014 BPB) and the pruning gap by 82%, by allowing weights to fully settle before GPTQ runs.
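The two schedules being compared can be sketched as follows; `max_lr` and the step counts are placeholders, not the PR's actual hyperparameters:

```python
import math

def linear_warmdown(step, total_steps, max_lr):
    # Linear decay from max_lr at step 0 to exactly 0 at the final step,
    # letting weights settle fully before post-training quantization.
    return max_lr * (1 - step / total_steps)

def cosine_with_floor(step, total_steps, max_lr, floor_frac=0.05):
    # Cosine decay that bottoms out at a floor (here 5% of max_lr), so the
    # weights are still moving at the end of training; the PR attributes
    # its larger quantization/pruning gaps to this residual motion.
    cos = 0.5 * (1 + math.cos(math.pi * step / total_steps))
    return max_lr * (floor_frac + (1 - floor_frac) * cos)
```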
Architecture
11-layer, 512-dim SP4096 transformer with:
- depth recurrence over layers 4–5
- parallel residuals from layer 7 onward
- MuonEq-R optimizer
- QK-Gain 5.0
- all-int6 GPTQ quantization
- Brotli-10 artifact compression
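The PR does not show the exact wiring, but the two structural ideas can be sketched generically; `parallel_block` and `depth_recurrent` are illustrative names, not from the PR's code:

```python
def parallel_block(x, attn, mlp, norm):
    # Parallel residuals: the attention and MLP branches both read the same
    # normalized input, and their outputs are added to the residual stream,
    # instead of the usual sequential attn-then-MLP composition.
    h = norm(x)
    return x + attn(h) + mlp(h)

def depth_recurrent(x, shared_block, steps=2):
    # Depth recurrence: the same block (shared weights) is applied `steps`
    # times, adding effective depth without adding parameters -- relevant
    # here because the artifact must fit under the 16 MB cap.
    for _ in range(steps):
        x = shared_block(x)
    return x
```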
Run Command
Test Plan