Record: LeakyReLU(0.9)² + N-gram Cache + Entropy-Reg QAT — val_bpb 0.9958 (3-seed mean) #885
lolrazh wants to merge 1 commit into openai:main from
Conversation
3-seed mean: 0.9958 BPB (std 0.0017). Seeds 1337/42/2025: 0.9977/0.9947/0.9949.

Built on the PR #549 stack with the following additions:
- Backward-looking 7-gram eval cache (alpha=0.2, score-first, ~98% hit rate)
- Entropy-regularized QAT (halves the quant gap: 0.009 vs 0.017)
- Mixed int5/int6 quantization (front3_back1_6_middle5) + per-row GPTQ-lite
- LeakyReLU(0.9)² (+0.013 BPB vs 0.5 slope)

All artifacts under 16 MB (~14.0 MB). All eval under 10 min (~552s TTT+ngram).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Community Review — BPB: 0.9958 | Compliance: LOOKS CLEAN — score-first-per-chunk TTT (legal per the #1413 dexhunter pattern)

What I found in the code (head SHA …): the TTT path at line 1225 implements the score-first-per-chunk pattern — each chunk is scored under the pre-update weights before the adapter trains on it. Per Issue #402 and Issue #677, TTT is legal when each token is scored before the adapter updates on it, and that is what the code does here.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.21s, dim=512, layers=11, vocab=1024, code=101270 B, SMOKE_TEST_PASS.

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending standard checks (3-seed validation, 16 MB artifact cap, 10-min wallclock on 8×H100 SXM). The compliance picture matches the legal reference frontier and no flags were raised by the classification pass.

Auto-classification caveat: this review was drafted by the AST-based classifier against a template derived from manually-reviewed cluster PRs (#1420, #1450, #1487, #1541, #1529, #1533, #1518). If I've misread a subtlety in your eval path — e.g., multi-epoch TTT that I mistook for single-pass, or a target-in-key lookup I missed in a helper function — please flag it and I'll re-run the audit manually.

Reviewed by @MatoTeziTanka — The Agora. Classification via deterministic AST-based classifier.
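The score-first-per-chunk ordering the review describes can be sketched as follows. This is a minimal illustration of the legality pattern, not the PR's actual code; the function names and toy score/update closures are invented for the demo:

```python
def score_first_ttt(chunks, score_fn, update_fn):
    """Score-first-per-chunk TTT: each chunk is scored under the weights
    as they stood before any update on that chunk; the adapter trains on
    the chunk only after its tokens have been scored."""
    scores = []
    for chunk in chunks:
        scores.append(score_fn(chunk))  # eval with pre-update weights
        update_fn(chunk)                # adapt only after scoring
    return scores

# Toy "model": the score records how many adapter updates have happened,
# so we can verify chunk i is never scored by a model that trained on it.
state = {"updates": 0}
scores = score_first_ttt(
    ["c0", "c1", "c2"],
    score_fn=lambda c: state["updates"],
    update_fn=lambda c: state.update(updates=state["updates"] + 1),
)
```

Because the score call precedes the update in every iteration, chunk i is always scored with exactly i prior updates — the ordering Issue #402 and Issue #677 require.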
Record: LeakyReLU(0.9)² + N-gram Cache + Entropy-Reg QAT — val_bpb 0.9958
val_bpb = 0.9958 (3-seed mean, std 0.0017) | ~14.0 MB | 8×H100 SXM
3-Seed Results (8×H100 80GB SXM, PyTorch 2.9.1+cu128)

| Seed | val_bpb |
| ---- | ------- |
| 1337 | 0.9977 |
| 42 | 0.9947 |
| 2025 | 0.9949 |
| Mean | 0.9958 (std 0.0017) |
What's New
Backward-looking 7-gram eval cache (alpha=0.2, score-first, ~98% hit rate) — exploits FineWeb's repetitive n-gram structure. Cache starts empty, builds from scored val tokens only. No oracle, no training data access during eval.
Entropy-regularized QAT — penalty term pushes weights toward quantization grid during warmdown. Halves quant gap (0.009 vs 0.017 BPB).
Mixed int5/int6 quantization (front3_back1_6_middle5) — int6 for sensitive layers (first 3 + last 1), int5 for the middle. Combined with per-row GPTQ-lite clip search.
LeakyReLU(0.9)² — slope 0.9 beats 0.5 by 0.013 BPB (controlled sweep, issue #140).
Score-first TTT (PR #549 recipe) — SGD(lr=0.002, mom=0.9), 3 epochs per 32K chunk, all blocks unfrozen.
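The backward-looking n-gram cache described above can be sketched roughly as follows. This is my reading of the bullet, not the PR's code: the cache starts empty, is keyed on the previous n-1 scored tokens, and is only ever updated with tokens that have already been scored, so no oracle or training-data access is involved; alpha blends the cache distribution into the model's probability.

```python
from collections import Counter, defaultdict

class BackwardNgramCache:
    """Backward-looking 7-gram eval cache sketch: maps a context of the
    previous n-1 tokens to counts of the token that followed, built only
    from already-scored validation tokens (score-first discipline)."""

    def __init__(self, n=7, alpha=0.2):
        self.n, self.alpha = n, alpha
        self.counts = defaultdict(Counter)  # context tuple -> next-token counts

    def blend(self, context, token, model_prob):
        key = tuple(context[-(self.n - 1):])
        bucket = self.counts.get(key)
        if not bucket:  # cache miss: fall back to the model alone
            return model_prob
        cache_prob = bucket[token] / sum(bucket.values())
        return (1 - self.alpha) * model_prob + self.alpha * cache_prob

    def update(self, context, token):
        # score-first: call this only AFTER `token` has been scored
        self.counts[tuple(context[-(self.n - 1):])][token] += 1

cache = BackwardNgramCache(n=7, alpha=0.2)
ctx = [1, 2, 3, 4, 5, 6]
p_miss = cache.blend(ctx, 9, model_prob=0.5)  # empty cache: model prob unchanged
cache.update(ctx, 9)
p_hit = cache.blend(ctx, 9, model_prob=0.5)   # blend pulls toward the cached count
```

On repetitive corpora like FineWeb the same 6-token contexts recur often, which is what makes a ~98% hit rate plausible.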
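The entropy-regularized QAT bullet can be illustrated with the following sketch. The exact penalty is not spelled out in this excerpt, so the mechanism here is an assumption: a term that penalizes each weight's distance to the nearest point of a uniform quantization grid, so weights settle onto the grid during warmdown and the post-quantization gap shrinks.

```python
def grid_penalty(weights, step):
    """QAT regularizer sketch (assumed mechanism, not the PR's code):
    mean squared distance of each weight to the nearest multiple of
    `step`, measured in grid units. Zero iff every weight already sits
    exactly on the quantization grid."""
    total = 0.0
    for w in weights:
        frac = w / step - round(w / step)  # 0 when w is on the grid
        total += frac * frac
    return total / len(weights)

on_grid = grid_penalty([0.0, 0.25, -0.5], step=0.25)  # already quantized
off_grid = grid_penalty([0.1], step=0.25)             # 0.4 steps off-grid
```

During warmdown this penalty would be added to the training loss with some coefficient, pulling weights toward representable values before the hard quantization step.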
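The mixed int5/int6 scheme and the per-row clip search can be sketched like this. The layer split follows the `front3_back1_6_middle5` name; the clip grid and MSE criterion are my assumptions for what a "GPTQ-lite clip search" would look like, not the PR's implementation:

```python
def bits_for_layer(i, n_layers, front=3, back=1):
    """front3_back1_6_middle5: int6 for the first 3 and last 1 layers,
    int5 for the middle (reading of the PR's scheme name)."""
    return 6 if i < front or i >= n_layers - back else 5

def quantize_row(row, bits, clip_grid=(1.0, 0.9, 0.8, 0.7)):
    """Per-row symmetric quantize/dequantize with a small clip search:
    try a few clip fractions of the row's max-abs scale and keep the
    reconstruction with the lowest mean squared error."""
    qmax = 2 ** (bits - 1) - 1
    maxabs = max(abs(v) for v in row)
    if maxabs == 0.0:
        return list(row)
    best_mse, best_deq = None, None
    for c in clip_grid:
        scale = maxabs * c / qmax
        deq = [max(-qmax, min(qmax, round(v / scale))) * scale for v in row]
        mse = sum((a - b) ** 2 for a, b in zip(row, deq)) / len(row)
        if best_mse is None or mse < best_mse:
            best_mse, best_deq = mse, deq
    return best_deq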
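One literal reading of the LeakyReLU(0.9)² activation is LeakyReLU with negative slope 0.9 followed by squaring. The exact composition is not spelled out in this excerpt (the PR may, e.g., preserve the sign on the negative branch), so treat this as an assumption:

```python
def leaky_relu_sq(x, slope=0.9):
    """'LeakyReLU(0.9)²' sketch: leaky rectification with negative
    slope 0.9, then squared. The squaring makes the output nonnegative
    on both branches; whether the PR keeps this form is an assumption."""
    y = x if x >= 0 else slope * x
    return y * y
```

Relative to the 0.5 slope, 0.9 passes most of the negative-side signal through before squaring, which is consistent with the reported +0.013 BPB from the slope sweep in issue #140.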
Timing Note
The logs show a redundant standalone sliding-window eval (~75-98s) that ran before TTT. It is redundant because TTT includes its own sliding-window scoring; the standalone eval's BPB is not the reported score. Without it, eval time is 576-581s (within the 600s budget). Full explanation in the README.
Credits