Record: 11L Parallel Muon + N-gram Backoff Cache — val_bpb 0.2841 (3-seed mean) #865
aryanbhosale wants to merge 1 commit into openai:main
Conversation
Community Review — Record: 11L Parallel Muon + N-gram Backoff Cache — val_bpb 0.2841 (3-seed mean)

BPB: 0.2841 | Compliance: LOOKS CLEAN — score-first-per-chunk TTT (legal #1416/#1423 pattern)

What I found in the code (head SHA …): the TTT path at line 636 implements the score-first-per-chunk pattern: each chunk is scored under … Per Issue #402 and Issue #677, TTT is legal when each token is scored before the adapter updates on it, and that's what the code does here — chunk …

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.15s, dim=512, layers=11, vocab=1024, code=93397 B, SMOKE_TEST_PASS.

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending standard checks (3-seed validation, 16 MB artifact cap, 10-min wallclock on 8×H100 SXM). The compliance picture matches the legal reference frontier and no flags were raised by the classification pass.

Auto-classification caveat: this review was drafted by the AST-based classifier against a template derived from manually reviewed cluster PRs (#1420, #1450, #1487, #1541, #1529, #1533, #1518). If I've misread a subtlety in your eval path — e.g., multi-epoch TTT that I mistook for single-pass, or a target-in-key lookup I missed in a helper function — please flag it and I'll re-run the audit manually.

Reviewed by @MatoTeziTanka — The Agora. Classification via deterministic AST-based classifier.
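For readers unfamiliar with the legality rule the review cites, the score-first-per-chunk TTT loop can be sketched roughly as follows. This is a hypothetical illustration, not the PR's actual code at line 636; `score_fn` and `update_fn` are assumed names.

```python
def ttt_eval(model, adapter, chunks, score_fn, update_fn):
    """Score-first-per-chunk test-time training.

    Legality rule (per the review): every token must be scored BEFORE
    the adapter updates on it. Hypothetical sketch under that rule.
    """
    total_loss, total_tokens = 0.0, 0
    for chunk in chunks:
        # 1. Score the chunk under the current, not-yet-updated adapter.
        loss = score_fn(model, adapter, chunk)
        total_loss += loss * len(chunk)
        total_tokens += len(chunk)
        # 2. Only afterwards may the adapter train on the chunk it just scored.
        adapter = update_fn(model, adapter, chunk)
    return total_loss / total_tokens
```

Reversing steps 1 and 2 (updating before scoring) is exactly the pattern the classifier is trained to flag as illegal.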
Record: 11L Parallel Muon + N-gram Backoff Cache
val_bpb = 0.2841 (3-seed mean, std 0.0001) | ~15.85 MB | 8×H100 SXM
3-Seed Results (8×H100 80GB SXM, PyTorch 2.9.1+cu128)
Key Innovation: N-gram Backoff Cache (Eval-Time Only)
An order-2-to-9 backward-looking N-gram cache with entropy-adaptive alpha blending, updated in 65K-token chunks at eval time.
The N-gram cache reduces BPB by ~4× (1.1274 → 0.2841) by exploiting repeated phrases and patterns in the validation stream.
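A minimal sketch of such a cache (longest-match backoff over orders 2-9, plus an entropy-adaptive blend with the model's probabilities) might look like the following. The names, the linear alpha schedule, and the 0.9 cap are assumptions for illustration, not the PR's implementation:

```python
import math
from collections import defaultdict

class NgramBackoffCache:
    """Backward-looking n-gram cache, orders 2-9 (context lengths 1-8)."""

    def __init__(self, min_order=2, max_order=9):
        self.orders = range(min_order, max_order + 1)
        # counts[n][context_tuple][next_token] -> occurrence count
        self.counts = {n: defaultdict(lambda: defaultdict(int)) for n in self.orders}

    def update(self, tokens):
        """Ingest a chunk of tokens (e.g. every 65K tokens at eval time)."""
        for n in self.orders:
            ctx_len = n - 1
            for i in range(ctx_len, len(tokens)):
                ctx = tuple(tokens[i - ctx_len:i])
                self.counts[n][ctx][tokens[i]] += 1

    def predict(self, context):
        """Back off from the longest matching context to the shortest."""
        for n in sorted(self.orders, reverse=True):
            ctx = tuple(context[-(n - 1):])
            if len(ctx) == n - 1 and ctx in self.counts[n]:
                dist = self.counts[n][ctx]
                total = sum(dist.values())
                return {t: c / total for t, c in dist.items()}
        return None  # no context of any order has been seen

def blend(model_probs, cache_dist, max_alpha=0.9):
    """Entropy-adaptive blend: trust the cache more when the model is uncertain."""
    if cache_dist is None:
        return model_probs
    entropy = -sum(p * math.log(p) for p in model_probs if p > 0)
    max_entropy = math.log(len(model_probs))
    alpha = max_alpha * entropy / max_entropy  # assumed linear schedule
    return [(1 - alpha) * p + alpha * cache_dist.get(t, 0.0)
            for t, p in enumerate(model_probs)]
```

On highly repetitive validation text the longest-order hit often concentrates all mass on one token, which is how a cache like this can cut BPB well below what the model alone achieves.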
Architecture (26.8M params)
11L 512d, 8H/4KV (GQA), MLP 3x LeakyReLU(0.5)², Parallel Muon (parameter banking + batched NS5), SmearGate, BigramHash(1024), Value Residual, Gated Attention, XSA4, Partial RoPE(16/64), EMA(0.997)+SWA, Late QAT, GPTQ-lite int6+zstd-22, FA3, torch.compile(fullgraph=True).
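The "LeakyReLU(0.5)²" activation in the MLP spec presumably denotes a LeakyReLU with negative slope 0.5 followed by squaring. A pure-Python sketch under that reading (the function names and the exact MLP wiring are assumptions):

```python
def leaky_relu_sq(x, slope=0.5):
    """Squared LeakyReLU: (LeakyReLU(x; slope)) ** 2 for a scalar input.

    Assumed reading of the record's 'LeakyReLU(0.5)^2': negative inputs
    are scaled by `slope` before squaring, so the output is non-negative
    everywhere and smooth at zero.
    """
    y = x if x >= 0.0 else slope * x
    return y * y

def mlp(x, w_in, w_out):
    """Assumed wiring: up-project (3x width per the '3x' ratio),
    apply the squared-LeakyReLU activation, then down-project."""
    hidden = [leaky_relu_sq(sum(xi * w for xi, w in zip(x, row)))
              for row in w_in]   # w_in: one weight vector per hidden unit
    return [sum(h * w for h, w in zip(hidden, row)) for row in w_out]
```

Squared activations of this family (cf. ReLU²) are a common speedrun choice; the negative-slope variant keeps a gradient signal for negative pre-activations.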
Timing
Compliance
Credits