
Record: AR Self-Gen GPTQ + XSA-11 + BigramHash3072x112 (mean 1.1156) #1280

Open

aamodbhatt wants to merge 2 commits into openai:main from aamodbhatt:record/legal-arselfgen-2026-04-03

Conversation


@aamodbhatt commented Apr 3, 2026

Summary

  • Adds a new submission folder under records/track_10min_16mb/2026-04-03_ARSelfGenGPTQ_XSA11_Bigram3072_8xH100
  • Strict legal path: single-pass causal eval (see the sketch after this list), no two-pass/full-rescore, no tokenizer/dataset changes, no TTT/SLOT/ngram scoring path
  • Includes required files: README.md, submission.json, train_gpt.py, and train logs
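
For readers outside the record threads, here is a minimal sketch of what the single-pass causal eval constraint above means: every token is scored exactly once, left to right, with no second pass or rescoring. The function name and windowing below are illustrative assumptions, not taken from this submission's train_gpt.py.

```python
import math

import torch
import torch.nn.functional as F

# Minimal single-pass causal eval sketch: each token is scored exactly once,
# in order, and nothing is ever rescored. `model` returns logits [B, T, V];
# `tokens` is [1, N]; `total_bytes` is the byte length of the underlying
# text. All names here are illustrative assumptions.
@torch.no_grad()
def bits_per_byte(model, tokens, total_bytes, window=1024):
    model.eval()
    nll_nats = 0.0
    for start in range(0, tokens.size(1) - 1, window):
        inputs = tokens[:, start:start + window]
        targets = tokens[:, start + 1:start + window + 1]
        logits = model(inputs)[:, :targets.size(1)]  # trim tail mismatch
        nll_nats += F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            targets.reshape(-1),
            reduction="sum",
        ).item()
    # Convert summed nats over all scored tokens to bits per raw byte.
    return nll_nats / (math.log(2) * total_bytes)
```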

Results

  • seed 42: 1.11505464 (15,852,402 bytes)
  • seed 1337: 1.11613083 (15,856,186 bytes)
  • seed 2025: 1.11546497 (15,847,666 bytes)
  • 3-seed mean: 1.11555015, std: 0.00044346
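
The aggregates follow directly from the three per-seed scores above; note the reported std is the population std (numpy's default, ddof=0):

```python
import numpy as np

# Reproduce the 3-seed summary statistics from the per-seed BPB values.
scores = np.array([1.11505464, 1.11613083, 1.11546497])
print(f"mean={scores.mean():.8f} std={scores.std():.8f}")
# -> mean=1.11555015 std=0.00044346
```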

aamodbhatt added 2 commits March 28, 2026 09:03
…179 (3-seed mean)

Two novel TTT innovations: (1) Muon-style Newton-Schulz orthogonalized updates replace SGD in the TTT loop; (2) entropy-adaptive 2/3/4 epochs per chunk based on globally-synced chunk NLL. 3-seed mean 1.1179, std 0.0002. All under 16MB/600s.
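
For context on the commit message above: "Muon-style Newton-Schulz orthogonalized updates" refers to replacing each 2-D gradient with an approximately orthogonalized version before applying it. A minimal sketch of the generic recipe, using the quintic coefficients from the public Muon implementation (this is the standard technique, not this PR's exact TTT code):

```python
import torch

# Generic Muon-style Newton-Schulz-5 orthogonalization of a 2-D gradient G.
# Coefficients are the quintic constants from the public Muon implementation.
def newton_schulz5(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (G.norm() + 1e-7)           # normalize so the iteration converges
    transposed = X.size(0) > X.size(1)
    if transposed:                      # iterate on the wide orientation
        X = X.mT
    for _ in range(steps):
        A = X @ X.mT
        B = b * A + c * (A @ A)
        X = a * X + B @ X
    return X.mT if transposed else X

# In a Muon-style TTT step, the plain SGD update `p -= lr * p.grad` becomes
# `p -= lr * newton_schulz5(p.grad)` for each 2-D weight matrix.
```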
@aamodbhatt changed the title from Non-record: AR Self-Gen GPTQ + XSA-11 + BigramHash3072x112 (mean 1.1156) to Record: AR Self-Gen GPTQ + XSA-11 + BigramHash3072x112 (mean 1.1156) on Apr 3, 2026
@MatoTeziTanka

Community Review — Record: AR Self-Gen GPTQ + XSA-11 + BigramHash3072x112 (mean 1.1156)

BPB: (not parsed — see PR title) | Compliance: LOOKS CLEAN — score-first-per-chunk TTT (legal #1416/#1423 pattern)

What I found in the code (head SHA 16196382e498, file records/track_10min_16mb/2026-03-28_MuonTTT_EntropyAdaptive_11L_8xH100/train_gpt.py):

The TTT path at line 1079 implements the score-first-per-chunk pattern: each chunk is scored under torch.no_grad() / inference_mode() before the base_model.train() + SGD adaptation runs on that same chunk, with an is_last_chunk guard so the final chunk gets no adaptation pass. This is the structural shape the legal frontier uses (PRs #1416 erichroepke, #1423 aryanbhosale).

Per Issue #402 and Issue #677, TTT is legal when each token is scored before the adapter updates on it, and that's what the code does here — chunk ci is scored under weights adapted only on chunks 0..ci-1. No prequant_ttt_adapt_adamw(val_tokens, ...) multi-epoch fine-tune, no scored-region SLOT, no target-in-key n-gram cache.
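
To make the score-first-per-chunk shape concrete, here is a minimal sketch of the pattern as described above: chunk ci is scored under weights adapted only on chunks 0..ci-1, and the final chunk gets no adaptation pass. model, chunks, and the single SGD step per chunk are illustrative stand-ins, not the PR's actual loop:

```python
import torch
import torch.nn.functional as F

# Score-first-per-chunk TTT sketch. `model` is a causal LM returning logits
# [B, T, V]; `chunks` is a list of token tensors [B, T+1]. Names and the
# single SGD step per chunk are illustrative assumptions.
def score_then_adapt(model, chunks, lr=1e-4):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    total_nll, total_tokens = 0.0, 0
    for ci, chunk in enumerate(chunks):
        inputs, targets = chunk[:, :-1], chunk[:, 1:]

        # 1) Score chunk ci under weights adapted only on chunks 0..ci-1.
        model.eval()
        with torch.no_grad():
            logits = model(inputs)
            total_nll += F.cross_entropy(
                logits.reshape(-1, logits.size(-1)),
                targets.reshape(-1),
                reduction="sum",
            ).item()
        total_tokens += targets.numel()

        # 2) Adapt on the chunk just scored; the is_last_chunk guard skips
        #    the final chunk, since no later chunk would benefit.
        if ci < len(chunks) - 1:
            model.train()
            opt.zero_grad()
            out = model(inputs)
            loss = F.cross_entropy(
                out.reshape(-1, out.size(-1)), targets.reshape(-1))
            loss.backward()
            opt.step()

    return total_nll / total_tokens  # mean NLL in nats per token
```

Because every chunk is scored strictly before any update touches it, the eval never benefits from gradient steps on tokens it has yet to score, which is the legality property the cited issues turn on.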

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.08s, dim=512, layers=11, vocab=1024, code=93038 B, SMOKE_TEST_PASS

Verdict: LOOKS CLEAN.

Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending standard checks (3-seed validation, 16MB artifact cap, 10-min wallclock on 8×H100 SXM). The compliance picture matches the legal reference frontier and no flags were raised by the classification pass.

Auto-classification caveat: this review was drafted by the AST-based classifier against a template derived from manually-reviewed cluster PRs (#1420, #1450, #1487, #1541, #1529, #1533, #1518). If I've misread a subtlety in your eval path — e.g., multi-epoch TTT that I mistook for single-pass, or a target-in-key lookup I missed in a helper function — please flag it and I'll re-run the audit manually.
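
For illustration, here is the flavor of deterministic check such an AST pass might run. This is a hypothetical, simplified stand-in (classify_prs.py itself is not shown in this thread) that flags a file whose first optimizer .step() call precedes its first no_grad/inference_mode block:

```python
import ast

# Hypothetical sketch of one deterministic AST check: find where a candidate
# train_gpt.py first scores under torch.no_grad()/torch.inference_mode() and
# where it first calls .step() on anything, then flag files that appear to
# adapt before they score. Crude on purpose (scheduler.step() also matches,
# the decorator form of no_grad is missed): a flag means "review manually".
def adapt_before_score(source: str) -> bool:
    tree = ast.parse(source)
    score_lines, step_lines = [], []
    for node in ast.walk(tree):
        if isinstance(node, ast.With):
            for item in node.items:
                call = item.context_expr
                if (isinstance(call, ast.Call)
                        and isinstance(call.func, ast.Attribute)
                        and call.func.attr in ("no_grad", "inference_mode")):
                    score_lines.append(node.lineno)
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "step"):
            step_lines.append(node.lineno)
    if not step_lines:
        return False   # nothing adapts, nothing to flag
    if not score_lines:
        return True    # adapts but never scores under a guard
    return min(step_lines) < min(score_lines)

with open("train_gpt.py") as f:
    if adapt_before_score(f.read()):
        print("FLAG: first .step() precedes first guarded scoring block")
```

A line-order heuristic like this can only raise flags for human review; it cannot prove compliance on its own.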


Reviewed by @MatoTeziTanka (The Agora). Classification via deterministic AST-based classify_prs.py (pattern bank derived from ~65 manually-reviewed PRs earlier in the 2026-04-11 sweep). This review was auto-drafted from a template and spot-checked before posting — if the template misread your code, please call it out so I can iterate the classifier.

