Draft

12 layers GPT | MLP_MULT reduction | VE and BIGRAM modifications #845

rubenbalbastre wants to merge 3 commits into openai:main from rubenbalbastre:submission/12_layers

Conversation

@rubenbalbastre

Analysis conducted using the [2026-03-23_LeakyReLU_LegalTTT_ParallelMuon] record as the base model, attempting to extend it to 12 layers. The token embedding dimension, the bigram vocab size and dimension, and MLP_MULT were modified.
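For context, a hypothetical sketch of the kind of hyperparameter delta described above; the layer count, embedding dimension, and vocab size match the smoke-test readout later in this thread, while the MLP_MULT and bigram values are placeholders, not this submission's actual settings:

```python
# Hypothetical config sketch -- n_layer/model_dim/vocab_size taken from the
# smoke-test readout below; MLP_MULT and bigram values are placeholders only.
from dataclasses import dataclass

@dataclass
class GPTConfig:
    n_layer: int = 12          # extended from the base record to 12 layers
    model_dim: int = 512       # token embedding dimension (modified)
    vocab_size: int = 1024     # vocab size reported by the smoke test
    mlp_mult: int = 2          # MLP width multiplier (reduced; placeholder value)
    bigram_vocab: int = 50_000 # bigram table vocab size (placeholder value)
    bigram_dim: int = 64       # bigram embedding dimension (placeholder value)
```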

No improvement over SOTA, but these notes may be useful to the community.

Currently working on it

@MatoTeziTanka

Community Review — 12 layers GPT | MLP_MULT reduction | VE and BIGRAM modifications

BPB: (not parsed — see PR title) | Compliance: LOOKS CLEAN — score-first-per-chunk TTT (legal #1416/#1423 pattern)

What I found in the code (head SHA 142cbdf666c7, file records/track_10min_16mb/2026-03-26_12L/train_gpt.py):

The TTT path at line 1074 implements the score-first-per-chunk pattern: each chunk is scored under torch.no_grad() / inference_mode() before the base_model.train() + SGD adaptation runs on that same chunk, with an is_last_chunk guard so the final chunk gets no adaptation pass. This is the structural shape the legal frontier uses (PRs #1416 by erichroepke and #1423 by aryanbhosale).

Per Issue #402 and Issue #677, TTT is legal when each token is scored before the adapter updates on it, and that's what the code does here — chunk ci is scored under weights adapted only on chunks 0..ci-1. No prequant_ttt_adapt_adamw(val_tokens, ...) multi-epoch fine-tune, no scored-region SLOT, no target-in-key n-gram cache.
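For readers unfamiliar with the pattern, here is a minimal sketch of a score-first-per-chunk TTT loop, assuming a model whose forward returns the mean loss given (inputs, targets); the names and signatures are illustrative, not the ones at line 1074 of train_gpt.py:

```python
import torch

def score_first_per_chunk_ttt(base_model, chunks, lr=1e-3):
    """Minimal sketch of score-first-per-chunk TTT: chunk ci is scored under
    weights adapted only on chunks 0..ci-1, then the model adapts on ci."""
    opt = torch.optim.SGD(base_model.parameters(), lr=lr)
    total_loss, total_tokens = 0.0, 0
    for ci, (inputs, targets) in enumerate(chunks):
        # 1) Score this chunk BEFORE any adaptation has seen it.
        base_model.eval()
        with torch.no_grad():
            loss = base_model(inputs, targets)  # assumed to return mean loss
        total_loss += loss.item() * targets.numel()
        total_tokens += targets.numel()
        # 2) Adapt on the chunk just scored; the final chunk gets no
        #    adaptation pass since nothing remains to be scored after it.
        is_last_chunk = ci == len(chunks) - 1
        if not is_last_chunk:
            base_model.train()
            opt.zero_grad(set_to_none=True)
            base_model(inputs, targets).backward()
            opt.step()
    return total_loss / total_tokens
```

The key invariant is the ordering inside the loop body: scoring strictly precedes the optimizer step on the same data, so no token's score ever benefits from an update computed on that token.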

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.09s, dim=512, layers=12, vocab=1024, code=89468 B, SMOKE_TEST_PASS
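As a rough illustration of such a check (the CT2038 proteus-engine harness is not part of this PR; the sketch below only parses rather than imports, and the path is taken from the file reference above):

```python
# Hypothetical smoke-test sketch -- the real harness presumably imports the
# module and introspects dim/layers/vocab; this version only parses the file.
import ast
import time

path = "records/track_10min_16mb/2026-03-26_12L/train_gpt.py"
src = open(path, encoding="utf-8").read()

t0 = time.perf_counter()
ast.parse(src)  # fails fast on syntax errors without executing training code
elapsed = time.perf_counter() - t0

print(f"parse OK in {elapsed:.2f}s, code={len(src.encode())} B, SMOKE_TEST_PASS")
```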

Verdict: LOOKS CLEAN.

Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending standard checks (3-seed validation, 16MB artifact cap, 10-min wallclock on 8×H100 SXM). The compliance picture matches the legal reference frontier and no flags were raised by the classification pass.
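A minimal sketch of those merge gates as code, using the caps from the recommendation above; the helper name and inputs are hypothetical, and how wallclock and per-seed BPB are measured is outside this sketch:

```python
# Hypothetical merge-gate check -- thresholds from the recommendation above.
import os

ARTIFACT_CAP_BYTES = 16 * 1024 * 1024  # 16MB artifact cap
WALLCLOCK_CAP_SECONDS = 10 * 60        # 10-min wallclock on 8xH100 SXM

def standard_checks_pass(artifact_path, wallclock_seconds, seed_bpbs):
    return (
        os.path.getsize(artifact_path) <= ARTIFACT_CAP_BYTES
        and wallclock_seconds <= WALLCLOCK_CAP_SECONDS
        and len(seed_bpbs) >= 3  # 3-seed validation: three completed runs
    )
```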

Auto-classification caveat: this review was drafted by the AST-based classifier against a template derived from manually-reviewed cluster PRs (#1420, #1450, #1487, #1541, #1529, #1533, #1518). If I've misread a subtlety in your eval path — e.g., multi-epoch TTT that I mistook for single-pass, or a target-in-key lookup I missed in a helper function — please flag it and I'll re-run the audit manually.
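As a simplified illustration of what a deterministic AST pass can check (this is an assumption about the approach, not classify_prs.py's actual logic):

```python
import ast

def uses_no_grad_scoring(source: str) -> bool:
    """Heuristic sketch: detect whether the file contains a
    `with torch.no_grad():` or `torch.inference_mode()` block -- a
    necessary (not sufficient) ingredient of score-first-per-chunk TTT."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.With):
            for item in node.items:
                expr = ast.unparse(item.context_expr)
                if "torch.no_grad" in expr or "torch.inference_mode" in expr:
                    return True
    return False
```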


Reviewed by @MatoTeziTanka (The Agora). CPU smoke test results as reported above (CT2038 proteus-engine, 2026-04-11: SMOKE_TEST_PASS). Classification via the deterministic AST-based classify_prs.py (pattern bank derived from ~65 manually-reviewed PRs earlier in the 2026-04-11 sweep). This review was auto-drafted from a template and spot-checked before posting; if the template misread your code, please call it out so I can iterate the classifier.
