Non-record: 11L Int6 + XSA + TTT + SmearGate + BigramHash (pending compute) #277

mohosy wants to merge 6 commits into openai:main from
Conversation
…mpute) Combines XSA (last 3 layers) and TTT (3-epoch SGD) on top of the full competitive meta stack. Score pending 8xH100 validation. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ver 16MB) 8xH100 SXM, 600s, 7723 steps. Sliding window eval stride=64. Artifact 16.17MB — needs WD bump from 0.04 to ~0.05 for valid submission. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
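The stride-64 sliding-window eval mentioned above can be sketched as follows. This is a hypothetical illustration of the scoring schedule only (which positions get scored with how much context), not code from the submission; `window=512` is an assumed context length that the commit message does not state.

```python
def sliding_window_spans(n_tokens, window=512, stride=64):
    """Sketch of a stride-64 sliding-window eval schedule.

    Each window advances by `stride` tokens and only the final `stride`
    positions of the window are scored, so every token is scored exactly
    once with up to `window - stride` tokens of left context.
    Returns (context_start, score_start, score_end) triples.
    """
    spans = []
    start = 0
    while start < n_tokens:
        end = min(start + stride, n_tokens)
        ctx_begin = max(0, end - window)  # left edge of the context window
        spans.append((ctx_begin, start, end))
        start = end
    return spans
```

Under this schedule the per-token bpb is averaged over the scored spans; the longer the window relative to the stride, the more context each scored token sees, at the cost of recomputing overlapping prefixes.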
EMA gives smoother weight averaging vs periodic SWA checkpoints. WD=0.042 targets ~15.5MB artifact (under 16MB limit). XSA on last 4 layers matches latest top submissions. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
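The EMA-vs-SWA point above can be made concrete with a minimal sketch. This is an illustrative standalone implementation, not the submission's code; the class and method names are hypothetical, and only the decay value (0.9985 per a later commit) comes from this PR.

```python
class EMA:
    """Exponential moving average of model parameters (minimal sketch).

    Unlike periodic SWA checkpoints, which average a few discrete
    snapshots, EMA blends in every step's weights with geometric decay,
    giving a smoother trailing average.
    """

    def __init__(self, params, decay=0.9985):
        self.decay = decay
        # shadow copy that trails the live parameters
        self.shadow = {name: float(v) for name, v in params.items()}

    def update(self, params):
        # shadow <- decay * shadow + (1 - decay) * current
        d = self.decay
        for name, v in params.items():
            self.shadow[name] = d * self.shadow[name] + (1.0 - d) * float(v)
```

At eval time the shadow weights are loaded in place of the live ones; in a real PyTorch training loop the same idea is available as `torch.optim.swa_utils.AveragedModel` with an EMA averaging function.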
Major rewrite based on latest meta (PRs openai#398, openai#442, openai#462):
- SwiGLU FFN with Star-ReLU (hidden=1792)
- U-Net skip connections with learned gating
- EMA (decay=0.9985) replacing SWA
- AdamW TTT (legal score-first protocol)
- Partial RoPE (16 dims)
- LN Scale (1/sqrt(layer_idx+1))
- BigramHash(8192) + SmearGate
- GPTQ-lite quantization
- DDP compile fix for multi-GPU

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
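The "SwiGLU FFN with Star-ReLU" item can be sketched on a single vector. This is a hypothetical per-vector illustration, not the submission's `train_gpt.py`: the StarReLU scale/bias constants are the published paper defaults (the submission's exact values are unknown), and the toy dimensions stand in for d_model x 1792.

```python
def star_relu(x, s=0.8944, b=-0.4472):
    """StarReLU activation: s * relu(x)^2 + b.

    s and b are the paper's default constants; in practice they are
    often learned per-layer.
    """
    r = max(x, 0.0)
    return s * r * r + b


def swiglu_ffn(x, w_gate, w_up, w_down):
    """SwiGLU-style FFN on one vector, gate activated with StarReLU.

    w_gate, w_up: d_model x d_hidden; w_down: d_hidden x d_model
    (plain nested lists for illustration).
    """
    d_model, d_hidden = len(x), len(w_gate[0])
    hidden = []
    for j in range(d_hidden):
        g = sum(x[i] * w_gate[i][j] for i in range(d_model))
        u = sum(x[i] * w_up[i][j] for i in range(d_model))
        hidden.append(star_relu(g) * u)  # gated hidden unit
    return [sum(hidden[j] * w_down[j][k] for j in range(d_hidden))
            for k in range(len(w_down[0]))]
```

The gate branch is squashed through StarReLU while the up branch stays linear, and their product is projected back down; with hidden=1792 on an 11-layer model this is where most of the parameter budget sits.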
…uals + AR GPTQ Incorporating latest frontier techniques. Verified runs coming mid-April. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Community Review — Non-record: 11L Int6 + XSA + TTT + SmearGate + BigramHash (pending compute)

BPB: 0.002 (cache parse — may be delta/std, not val_bpb; check PR title) | Compliance: FLAG — Pre-Quant TTT runs multi-epoch on

What I found in the code (head SHA ): at line 936 the pre-quant TTT function takes

Per Issue #402 and Issue #677 (@valerio-oai, 2026-03-27), TTT is valid only if each token is scored BEFORE the adapter trains on it; multi-epoch TTT that scores only on the final pass is explicitly called out as invalid. This implementation matches the pattern that got PR #1376 (stukenov) closed and was subsequently confirmed in #1485/#1487/#1488/#1489/#1517/#1539 — see the Issue #677 meta-comment from 2026-04-11, which lists the 6+ PRs in the cluster.

Contrast with the legal Pre-Quant TTT pattern (e.g. the PR #1416 / PR #1423 lineage): those train the adapter on a held-out slice of training data (not

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.03s, dim=512, layers=11, vocab=1024, code=64120 B, SMOKE_TEST_PASS. Classification via deterministic AST-based

Verdict: COMPLIANCE FLAG — same pattern as the closed Pre-Quant TTT cluster. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: CLOSE under the same ruling as #1376 and the rest of the cluster. A resubmission with the TTT function taking a training-data slice instead of

Reviewed by @MatoTeziTanka — The Agora.
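The score-first protocol the review cites from Issue #402/#677 can be sketched as a causal loop. This is an illustrative skeleton, not code from the submission or from any cited PR; `score` and `train_step` are hypothetical callbacks standing in for the model's loss computation and the adapter's optimizer step.

```python
def score_first_ttt(tokens, score, train_step, chunk=64):
    """Legal 'score-first' test-time-training loop (sketch).

    Per the ruling summarized in the review: every chunk is scored
    with the CURRENT adapter state BEFORE the adapter trains on it,
    and no chunk is ever rescored after training. Multi-epoch TTT
    that reports only final-pass scores violates this ordering.
    Returns the per-chunk losses accumulated in causal order.
    """
    losses = []
    for i in range(0, len(tokens), chunk):
        piece = tokens[i:i + chunk]
        losses.append(score(piece))  # score first, pre-update state
        train_step(piece)            # only then adapt on that chunk
    return losses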
Summary
Combines the two strongest eval-time techniques (XSA + TTT) on top of the full competitive meta stack. Score pending 8xH100 validation — applying for compute grant.
Checklist
- Submission folder under records/track_10min_16mb/ contains README.md, submission.json, train_gpt.py