Record: 11L LatentMask TTT + GPTQ + Product-Key Bigram + Brotli — val_bpb 1.1124 (3-seed mean) (#1410)
Conversation
The previous submission incorrectly used Flash Attention 2 (`flash_attn.flash_attn_interface`) while reporting FA3 in metadata. This commit updates all 3-seed results with actual FA3 (`flash_attn_interface`) runs on 8xH100. FA3 Hopper kernels reduced step_avg from ~102.9ms to ~91.5ms, yielding ~725 more training steps and improving val_bpb from 1.1158 to 1.1124 (3-seed mean).

- train_gpt.py: import fixed in prior commit (db53912)
- train_{777,999,1337}.log: replaced with FA3 run logs
- submission.json: updated metrics, std, artifact sizes
- README.md: updated results table, dependencies (flash-attn-3)
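The FA2/FA3 mix-up described above can be guarded against by probing which package is actually installed before reporting a backend in metadata. A minimal sketch, assuming the publicly documented module layout (FA3 ships a top-level `flash_attn_interface`, FA2 exposes `flash_attn.flash_attn_interface`); the `pick_attention_backend` helper and the `"sdpa"` fallback name are hypothetical, not taken from this repo:

```python
# Sketch: detect which flash-attention generation is importable, preferring FA3.
# Falling back to "sdpa" (PyTorch's scaled_dot_product_attention) is an
# assumption about how a training script might degrade gracefully.
import importlib.util

def pick_attention_backend() -> str:
    """Return the best available attention backend name."""
    if importlib.util.find_spec("flash_attn_interface") is not None:
        return "fa3"  # Hopper kernels from the flash-attn-3 package
    if importlib.util.find_spec("flash_attn") is not None:
        return "fa2"  # flash_attn.flash_attn_interface
    return "sdpa"     # portable PyTorch fallback

print(pick_attention_backend())
```

Recording the probed value (rather than a hardcoded string) in submission.json would have caught the metadata mismatch automatically.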
Compare: 1119222 to 6730fb5
Updated results with Flash Attention 3. FA3 Hopper kernels brought step_avg down from ~103ms to ~91.5ms, gaining ~725 extra steps and improving val_bpb from 1.1158 to 1.1124 (3-seed mean) :)
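The ~725-step gain follows directly from the fixed wallclock budget: faster steps buy more steps. A quick arithmetic check, assuming the 10-minute (600 s) cap mentioned in the review checklist and the step times quoted above:

```python
# Sanity check: extra steps from reducing step time under a fixed time budget.
# The 600 s budget is an assumption (the 10-min wallclock cap); step times are
# the ~102.9 ms and ~91.5 ms figures quoted in the commit message.
budget_s = 600.0
steps_fa2 = budget_s / 0.1029   # steps possible at ~102.9 ms/step
steps_fa3 = budget_s / 0.0915   # steps possible at ~91.5 ms/step
extra = steps_fa3 - steps_fa2   # close to the reported ~725 extra steps
print(round(steps_fa2), round(steps_fa3), round(extra))
```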
Community Review — Record: 11L LatentMask TTT + GPTQ + Product-Key Bigram + Brotli — val_bpb 1.1124 (3-seed mean)

BPB: 1.1124 | Compliance: LOOKS CLEAN — score-first-per-chunk TTT (legal #1416/#1423 pattern)

What I found in the code (head SHA): The TTT path at line 708 implements the score-first-per-chunk pattern: each chunk is scored under the current adapter before the adapter updates on it. Per Issue #402 and Issue #677, TTT is legal when each token is scored before the adapter updates on it, and that's what the code does here, chunk by chunk.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.04s, dim=512, layers=11, vocab=1024, code=61917 B, SMOKE_TEST_PASS.

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending standard checks (3-seed validation, 16MB artifact cap, 10-min wallclock on 8×H100 SXM). The compliance picture matches the legal reference frontier and no flags were raised by the classification pass.

Auto-classification caveat: this review was drafted by the AST-based classifier against a template derived from manually-reviewed cluster PRs (#1420, #1450, #1487, #1541, #1529, #1533, #1518). If I've misread a subtlety in your eval path — e.g., multi-epoch TTT that I mistook for single-pass, or a target-in-key lookup I missed in a helper function — please flag it and I'll re-run the audit manually.

Reviewed by @MatoTeziTanka — The Agora. Classification via deterministic AST-based classifier.
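The score-first-per-chunk pattern the review checks for can be illustrated with a toy adapter. This is a hedged sketch, not the submission's code: `ttt_eval`, the linear adapter, and the squared-error loss are all stand-ins chosen to make the ordering explicit. The legality property is that each chunk contributes to the reported loss before the adapter trains on it, so no token is ever scored by parameters that have already seen it.

```python
# Illustrative score-first-per-chunk TTT loop (toy linear adapter, not the
# submission's model). Step 1 scores the chunk under the current adapter;
# only step 2 updates the adapter on that same chunk.
import numpy as np

def ttt_eval(chunks, W, lr=0.1):
    """chunks: list of (x, y) arrays; W: adapter weights, updated in place."""
    losses = []
    for x, y in chunks:
        pred = x @ W                                  # 1) score first
        losses.append(float(np.mean((pred - y) ** 2)))
        grad = 2 * x.T @ (x @ W - y) / len(x)
        W -= lr * grad                                # 2) then adapt
    return float(np.mean(losses))

rng = np.random.default_rng(0)
W_true = rng.normal(size=(4, 2))
chunks = [(x, x @ W_true) for x in (rng.normal(size=(16, 4)) for _ in range(8))]
W = np.zeros((4, 2))
loss = ttt_eval(chunks, W)
print(loss)
```

An illegal multi-epoch variant would loop over `chunks` more than once with the same `W`: on the second pass every chunk is scored by an adapter that already trained on it, which is exactly what the classifier is screening for.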
Summary
11L GPT with two novel techniques — LatentMask TTT and Product-Key Bigram — plus Brotli-11 compression and Flash Attention 3, achieving a 3-seed mean val_bpb of 1.1124 on 8xH100.
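Of the components listed above, GPTQ is the weight quantizer, presumably helping the artifact fit under the 16MB cap alongside Brotli-11. As a rough sketch of the storage format only: the snippet below does plain round-to-nearest int4 with per-column scales, whereas real GPTQ additionally compensates quantization error column by column using second-order (Hessian) information. Function names here are illustrative, not from the submission.

```python
# Simplified int4 weight quantization (storage-format sketch, NOT full GPTQ:
# no Hessian-based error compensation is performed here).
import numpy as np

def quantize_int4(W):
    """Quantize to signed 4-bit integers with one scale per column."""
    scale = np.max(np.abs(W), axis=0, keepdims=True) / 7.0  # int4 range [-8, 7]
    q = np.clip(np.round(W / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
W = rng.normal(size=(512, 512)).astype(np.float32)
q, scale = quantize_int4(W)
err = float(np.mean((dequantize(q, scale) - W) ** 2))
print(err)
```

Packing two int4 values per byte plus float scales is what brings a checkpoint down to roughly an eighth of its fp32 size before entropy coding such as Brotli is applied.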
What's New in This Submission
- Product-Key Bigram: embed_prev(1024,512) * embed_cur(1024,512) — zero hash collisions, no projection layer. Cleaner and more parameter-efficient than hash-based BigramHash.

Results (8xH100, 3-seed)
Test plan
torchrun --standalone --nproc_per_node=8 train_gpt.py on 8xH100
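The Product-Key Bigram described in What's New can be sketched as two small embedding tables combined per token pair. This is a hedged reading of `embed_prev(1024,512) * embed_cur(1024,512)`: the elementwise-product combine and the `bigram_feature` helper are assumptions about the submission's code, and the table contents are random here.

```python
# Sketch of a product-key bigram: every (prev, cur) pair out of 1024*1024 gets
# a distinct 512-dim feature from two tables, with no hash table and no
# projection layer. Combine-by-elementwise-product is an assumption.
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 1024, 512
embed_prev = rng.normal(size=(vocab, dim)).astype(np.float32)
embed_cur = rng.normal(size=(vocab, dim)).astype(np.float32)

def bigram_feature(prev_ids, cur_ids):
    """One 512-dim feature per (prev, cur) token pair in a batch."""
    return embed_prev[prev_ids] * embed_cur[cur_ids]

feats = bigram_feature(np.array([3, 7]), np.array([5, 5]))
print(feats.shape)  # (2, 512)
```

The parameter-efficiency claim is visible in the counts: two tables cost 2 * 1024 * 512 ≈ 1.05M parameters, versus 1024² * 512 for an explicit bigram table, and unlike a hashed bigram no two pairs share a slot.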