Non-record: PR703 + shard-order curriculum + GPTQ cache-backout (1.1171) #783
petergpt wants to merge 1 commit into openai:main
Conversation
Community Review — Non-record: PR703 + shard-order curriculum + GPTQ cache-backout (1.1171)

BPB: (not parsed — see PR title) | Compliance: LOOKS CLEAN — score-first-per-chunk TTT (legal #1416/#1423 pattern)

What I found in the code (head SHA …): the TTT path at line 1183 implements the score-first-per-chunk pattern. Per Issue #402 and Issue #677, TTT is legal when each token is scored before the adapter updates on it, and that is what the code does here: each chunk is scored before the adapter state is updated on that chunk.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.06s, dim=512, layers=11, vocab=1024, code=111257 B, SMOKE_TEST_PASS.

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending standard checks (3-seed validation, 16MB artifact cap, 10-min wallclock on 8×H100 SXM). The compliance picture matches the legal reference frontier, and no flags were raised by the classification pass.

Auto-classification caveat: this review was drafted by the AST-based classifier against a template derived from manually reviewed cluster PRs (#1420, #1450, #1487, #1541, #1529, #1533, #1518). If I've misread a subtlety in your eval path — e.g., multi-epoch TTT that I mistook for single-pass, or a target-in-key lookup I missed in a helper function — please flag it and I'll re-run the audit manually.

Reviewed by @MatoTeziTanka — The Agora. Classification via deterministic AST-based classifier.
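The score-first-per-chunk ordering the review describes can be sketched as follows. This is a minimal illustration of the legality condition, not this PR's actual code; `score_fn`, `update_fn`, and `evaluate_with_ttt` are hypothetical names standing in for the model's scoring and adapter-update steps.

```python
def evaluate_with_ttt(score_fn, update_fn, chunks):
    """Score-first-per-chunk test-time training (TTT).

    Each chunk is scored under the current model state BEFORE the
    adapter updates on that same chunk, so no token is ever evaluated
    by a model that has already trained on it (the legality condition
    the review attributes to Issues #402/#677).
    """
    losses = []
    for chunk in chunks:
        losses.append(score_fn(chunk))  # 1) score first, pre-update state
        update_fn(chunk)                # 2) only then adapt on this chunk
    return sum(losses) / len(losses)
```

An AST-based check like the classifier's would verify that the scoring call precedes the update call inside the per-chunk loop, which is exactly the property this sketch makes explicit.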
Summary
This PR adds a non-record 16MB submission built on the PR703-style quant/cache-backout branch with a score-ranked shard curriculum.
Submitted as non-record because the improvement does not meet the 0.005-nat record threshold required for leaderboard record promotion.

Result

- final_int6_sliding_window_exact = 1.11709895
- final_int6_roundtrip_exact = 1.14068680
- post_ema = 1.1368
- step_stop = 6918
- step_avg = 86.75 ms
- total submission size = 15,909,560 bytes
- bytes under cap = 90,440

What changed relative to the forked PR703 base
The base PR703 local package was 1.11748714 at 15,963,300 bytes. This submission keeps the same general object class and mainly changes the shard curriculum (score-ranked shard ordering, per the title and summary). This is an incremental optimization package, not a new frontier architecture claim.
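A score-ranked shard curriculum of the kind this submission describes can be sketched as below. This is an illustrative sketch, not the PR's implementation; `build_shard_curriculum`, `score_of`, and the shard ids are hypothetical names (the actual scores are presumably produced by the included `score_shards.py`).

```python
def build_shard_curriculum(shard_ids, score_of, easiest_first=True):
    """Reorder training shards by a precomputed score.

    `score_of` maps a shard id to its score, e.g. mean reference-model
    loss on that shard. The returned order is the curriculum the
    training loop would consume.
    """
    return sorted(shard_ids, key=score_of, reverse=not easiest_first)
```

Whether low-score or high-score shards come first is a design choice; the `easiest_first` flag here just makes that choice explicit.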
Included files
- README.md
- submission.json
- train.log
- train_gpt.py
- score_shards.py

Submission folder:
records/track_non_record_16mb/2026-03-25_PR703_Curriculum_Carryover_1.1171
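As a quick sanity check, the size figures reported above are internally consistent with a 16,000,000-byte cap (the cap value is inferred from the reported headroom, not stated in the PR; the variable names are illustrative):

```python
# Inferred: 15,909,560 reported bytes + 90,440 reported headroom = 16,000,000-byte cap.
CAP_BYTES = 16_000_000
submission_bytes = 15_909_560
bytes_under_cap = CAP_BYTES - submission_bytes

assert submission_bytes <= CAP_BYTES
assert bytes_under_cap == 90_440  # matches the reported "bytes under cap"
```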