Skip to content

Record: 11L Depth Recurrence + Discriminative Pre-Quant TTT (8xH100) — val_bpb 1.0887 (3-seed mean)#1406

Open
aamodbhatt wants to merge 3 commits intoopenai:mainfrom
aamodbhatt:record-2026-04-06-depth-recurrence-prequant-ttt
Open

Record: 11L Depth Recurrence + Discriminative Pre-Quant TTT (8xH100) — val_bpb 1.0887 (3-seed mean)#1406
aamodbhatt wants to merge 3 commits intoopenai:mainfrom
aamodbhatt:record-2026-04-06-depth-recurrence-prequant-ttt

Conversation

@aamodbhatt
Copy link
Copy Markdown

Summary

Two innovations stacked on the PR #1351 base (Discriminative TTT + MuonEq-R + QK-Gain): (1) Depth Recurrence — blocks 4 and 5 run twice in the forward pass, giving 13 effective layer passes from 11 physical blocks at zero parameter overhead; (2) Discriminative Pre-Quant TTT — AdamW adaptation on val chunks before GPTQ quantization with per-block LR scaling (0.3× early to 1.0× late blocks, 10 epochs). Both baked into artifact; model frozen at eval.

Run Results (3 seeds)

Seed final_int6_sliding_window_exact val_bpb pre-quant val_bpb train time artifact size
1337 1.08769930 1.1399 599.1s 15,926,365 bytes
42 1.08824663 1.1371 599.1s 15,924,771 bytes
2025 1.09028840 1.1367 599.1s 15,914,559 bytes
mean 1.08874

Method Notes

  • NUM_LAYERS=11, BIGRAM_VOCAB_SIZE=1536, XSA_LAST_N=4
  • RECUR_ENABLED=1, RECUR_LAYERS=4,5 — blocks 4 and 5 run twice (depth recurrence)
  • TTT_ENABLED=1, TTT_LR=0.0005, TTT_EPOCHS=10 — discriminative pre-quant TTT
  • TTT_FREEZE_BLOCKS=0, TTT_COSINE_DECAY=1 — all blocks adapt, cosine LR decay
  • QK_GAIN_INIT=5.0 — learnable per-head Q scaling
  • MUON_WD=0.04, WARMDOWN_ITERS=3500, MAX_WALLCLOCK_SECONDS=599
  • NGRAM_EVAL_ENABLED=0
  • NGRAM_TWO_PASS_ENABLED=0
  • NGRAM_FULL_RESCORE=0
  • EMA_ENABLED=1, SWA_ENABLED=1, LATE_QAT=1, VE_ENABLED=1

Submission Checklist

  • One folder under records/track_10min_16mb/
  • Included README.md, submission.json, train_gpt.py, and train logs (3 seeds)
  • Training <= 600s
  • Eval <= 600s
  • Artifact <= 16,000,000 bytes
  • No tokenizer/dataset modifications
  • Score-first TTT (each chunk scored under inference_mode() before adaptation)
  • No eval-time weight adaptation — model frozen after training + dTTT + GPTQ
  • No n-gram, no two-pass, no external data lookup

aamodbhatt added 3 commits March 28, 2026 09:03
…179 (3-seed mean)

Two novel TTT innovations: (1) Muon-style Newton-Schulz orthogonalized updates
replace SGD in the TTT loop; (2) entropy-adaptive 2/3/4 epochs per chunk based
on globally-synced chunk NLL. 3-seed mean 1.1179, std 0.0002. All under 16MB/600s.
@MatoTeziTanka
Copy link
Copy Markdown

Community Review — Record: 11L Depth Recurrence + Discriminative Pre-Quant TTT (8xH100) — val_bpb 1.0887 (3-seed mean)

BPB: 1.0887 | Compliance: LOOKS CLEAN — score-first-per-chunk TTT (legal #1416/#1423 pattern)

What I found in the code (head SHA dd52bdd22704, file records/track_10min_16mb/2026-03-28_MuonTTT_EntropyAdaptive_11L_8xH100/train_gpt.py):

The TTT path at line 1079 implements the score-first-per-chunk pattern: each chunk is scored under torch.no_grad() / inference_mode() before the base_model.train() + SGD adaptation runs on that same chunk, with an is_last_chunk guard so the final chunk gets no adaptation pass. This is the structural shape the legal frontier uses (PRs #1416 erichroepke, #1423 aryanbhosale).

Per Issue #402 and Issue #677, TTT is legal when each token is scored before the adapter updates on it, and that's what the code does here — chunk ci is scored under weights adapted only on chunks 0..ci-1. No prequant_ttt_adapt_adamw(val_tokens, ...) multi-epoch fine-tune, no scored-region SLOT, no target-in-key n-gram cache.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.05s, dim=512, layers=11, vocab=1024, code=93038 B, SMOKE_TEST_PASS

Verdict: LOOKS CLEAN.

Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending standard checks (3-seed validation, 16MB artifact cap, 10-min wallclock on 8×H100 SXM). The compliance picture matches the legal reference frontier and no flags were raised by the classification pass.

Auto-classification caveat: this review was drafted by the AST-based classifier against a template derived from manually-reviewed cluster PRs (#1420, #1450, #1487, #1541, #1529, #1533, #1518). If I've misread a subtlety in your eval path — e.g., multi-epoch TTT that I mistook for single-pass, or a target-in-key lookup I missed in a helper function — please flag it and I'll re-run the audit manually.


Reviewed by @MatoTeziTankaThe Agora. CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.05s, dim=512, layers=11, vocab=1024, code=93038 B, SMOKE_TEST_PASS. Classification via deterministic AST-based classify_prs.py (pattern bank derived from ~65 manually-reviewed PRs earlier in the 2026-04-11 sweep). This review was auto-drafted from a template and spot-checked before posting — if the template misread your code, please call it out so I can iterate the classifier.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants