Record: Split-LR + N-gram Agreement + Full GPTQ — val_bpb 1.1079 (3-seed mean)#1302
vlivashkin wants to merge 2 commits into openai:main from
Conversation
76168cd to 3273d7f
Community Review — Record: Split-LR + N-gram Agreement + Full GPTQ — val_bpb 1.1079 (3-seed mean)

BPB: 1.1079 | Compliance: LOOKS CLEAN — score-first-per-chunk TTT (legal #1416/#1423 pattern)

What I found in the code (head SHA
The TTT path at line 410 implements the score-first-per-chunk pattern: each chunk is scored under
Per Issue #402 and Issue #677, TTT is legal when each token is scored before the adapter updates on it, and that's what the code does here — chunk

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.03s, dim=512, layers=11, vocab=1024, code=71339 B, SMOKE_TEST_PASS

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending standard checks (3-seed validation, 16 MB artifact cap, 10-min wallclock on 8×H100 SXM). The compliance picture matches the legal reference frontier, and no flags were raised by the classification pass.

Auto-classification caveat: this review was drafted by the AST-based classifier against a template derived from manually reviewed cluster PRs (#1420, #1450, #1487, #1541, #1529, #1533, #1518). If I've misread a subtlety in your eval path — e.g., multi-epoch TTT that I mistook for single-pass, or a target-in-key lookup I missed in a helper function — please flag it and I'll re-run the audit manually.

Reviewed by @MatoTeziTanka — The Agora.
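The score-first-per-chunk TTT pattern the review describes can be sketched as a small loop. The `UnigramAdapter` below is a hypothetical toy stand-in for the real adapter (just to make the loop runnable); the point is the ordering: every chunk is fully scored under the current weights before the model updates on it.

```python
from collections import Counter
import math

class UnigramAdapter:
    """Toy add-one unigram LM standing in for the real test-time adapter."""
    def __init__(self, vocab_size: int):
        self.vocab_size = vocab_size
        self.counts = Counter()
        self.total = 0

    def score(self, chunk) -> float:
        """Total NLL (nats) of the chunk under the CURRENT weights only."""
        nll = 0.0
        for tok in chunk:
            p = (self.counts[tok] + 1) / (self.total + self.vocab_size)
            nll += -math.log(p)
        return nll

    def update(self, chunk) -> None:
        """Adapt on the chunk (only called AFTER scoring it)."""
        self.counts.update(chunk)
        self.total += len(chunk)

def ttt_eval(model, chunks) -> float:
    """Score-first-per-chunk TTT: no token's score depends on having seen it."""
    total_nll, total_toks = 0.0, 0
    for chunk in chunks:
        total_nll += model.score(chunk)   # 1) score under frozen weights
        total_toks += len(chunk)
        model.update(chunk)               # 2) only then adapt on the chunk
    return total_nll / total_toks
```

With repeated data, later chunks score better than the first (the adapter learned from earlier chunks), while the first chunk is untouched — which is exactly the legality condition from Issues #402/#677.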
Summary
SOTA (PR #1019, 3-seed mean): 1.8822 nats. This run: 1.8752 nats. Delta: -0.00697 nats. Clears the 0.005-nat threshold.
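As a quick sanity check of the record claim with the rounded figures quoted above (the quoted −0.00697 presumably comes from unrounded per-seed means; the rounded values give −0.0070):

```python
sota_nats = 1.8822      # PR #1019, 3-seed mean (rounded)
this_run  = 1.8752      # this PR, 3-seed mean (rounded)
delta = this_run - sota_nats
print(round(delta, 4))          # -0.007 with these rounded inputs
assert abs(delta) >= 0.005      # clears the 0.005-nat record threshold
```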
What's New vs PR #1019
Training (from PR #1179): Split-LR (early=0.025, late=0.030), BigramHash(2816×160), Sigmoid-gated U-Net, Soft-round QAT (alpha 1→16), Brotli-11 + byte-shuffle, Coprime-stride loader
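Of these training-side pieces, soft-round QAT is the easiest to show in isolation. Below is a minimal sketch of one common soft-rounding formulation (Agustsson & Theis style; the exact function in PR #1179 may differ): raising `alpha` from 1 to 16 over training anneals a near-identity map toward hard rounding.

```python
import math

def soft_round(x: float, alpha: float) -> float:
    """Differentiable surrogate for round(): approaches the identity as
    alpha -> 0 and hard rounding as alpha -> inf (annealed 1 -> 16 in QAT)."""
    m = math.floor(x) + 0.5
    r = x - m                     # r in [-0.5, 0.5)
    return m + 0.5 * math.tanh(alpha * r) / math.tanh(alpha / 2)
```

At `alpha = 16` the output of `soft_round(1.7, 16)` is already within 0.01 of the hard-rounded value 2.0, so the quantizer the model trains against late in the schedule is nearly the one used at inference.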
Evaluation: Online n-gram agreement — 3 causal experts (token 16-gram, within-word, word-start) with agreement boosting. Adjusts LLM probabilities via properly normalized exponential tilting. Contributes −0.0028 BPB.
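A hedged sketch of what "properly normalized exponential tilting" with agreement-weighted experts could look like. The PR does not spell out its agreement-boosting rule, so the weights below (each expert weighted by its mean dot product with the other experts) are a hypothetical illustration, as are the names `tilt` and `lam`:

```python
import numpy as np

def tilt(p_llm, expert_ps, lam=0.5):
    """Exponentially tilt an LLM next-token distribution toward n-gram experts.

    p_llm:     (V,) LLM distribution over the vocabulary.
    expert_ps: list of (V,) expert distributions (e.g. token 16-gram,
               within-word, word-start experts).
    lam:       tilting strength (hypothetical hyperparameter).
    """
    expert_ps = [np.asarray(p, dtype=float) for p in expert_ps]
    # Agreement weight per expert: how much it agrees with the others.
    w = np.array([np.mean([p @ q for q in expert_ps if q is not p])
                  for p in expert_ps])
    w = w / w.sum()
    # q(x) ∝ p_llm(x) * exp(lam * sum_i w_i * log p_i(x)), normalized exactly.
    log_q = np.log(p_llm) + lam * sum(wi * np.log(pi + 1e-12)
                                      for wi, pi in zip(w, expert_ps))
    log_q -= log_q.max()          # stabilize before exponentiating
    q = np.exp(log_q)
    return q / q.sum()            # exact normalization
```

Because the tilt is applied in log space and renormalized, the output is a valid distribution regardless of `lam`, which is what makes the BPB contribution well-defined.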
Results (8×H100 SXM, no TTT)
Compliance
Reproduction
See README.md for full details.
Credits