Non-record: 6-Technique Stack — Catalytic Residuals + Value Residual + Gated Attention + BigramHash(10240) + 12L (val_bpb=1.1690) #474

Open
joshuaswarren wants to merge 1 commit into openai:main from joshuaswarren:submission/6-technique-stack

Conversation

@joshuaswarren

Non-record: 6-Technique Stack (val_bpb=1.1690, 8xH100 SXM)

First submission combining six independently proven architecture improvements that have not previously been stacked together:

| Technique | Source | Impact |
| --- | --- | --- |
| Catalytic Residuals | PR #450 | -0.024 bpb |
| Value Residual (ResFormer) | PR #413, arXiv:2410.17897 | -0.015 bpb |
| Gated Attention | PR #413, arXiv:2505.06708 | -0.003 bpb |
| BigramHash(10240) | PR #450 | -0.070 bpb vs 2048 |
| 12 Layers | PR #450 | -0.023 bpb vs 11L |
| 3x MLP | Merged SOTA | Standard |
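Of the stacked techniques, value residual and gated attention are the two drawn from PR #413. A minimal, framework-free sketch of both ideas (the mixing coefficient, gate placement, and function names here are illustrative assumptions, not this PR's actual code):

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def mix_value_residual(v_layer1, v_local, lam=0.5):
    # ResFormer-style value residual (arXiv:2410.17897): each layer's
    # attention values are a blend of that layer's own values and the
    # first layer's values. `lam` stands in for the learned weight.
    return [lam * a + (1.0 - lam) * b for a, b in zip(v_layer1, v_local)]

def gate_attention_output(attn_out, gate_logits):
    # Gated attention (arXiv:2505.06708): an elementwise sigmoid gate
    # modulates the attention output before it re-enters the residual
    # stream. In a real model the gate is a learned projection of the input.
    return [sigmoid(g) * o for g, o in zip(gate_logits, attn_out)]
```

In a real model both `lam` and the gate logits are learned; this sketch only shows where the two mechanisms act.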

Results

  • Sliding window val_bpb: 1.1690 (stride 64)
  • Post-quant roundtrip: 1.2043
  • Pre-quant: 1.1911
  • 6,981 steps at 85.78 ms/step
  • Artifact: 15.3 MB (int6+zstd, under 16MB)
  • Training: 598.8s on 8xH100 SXM
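As context for the headline number: a hedged sketch of how a stride-64 sliding-window evaluation is typically computed. The function name and signature are illustrative; `token_nlls_fn` stands in for whatever the repo's eval loop actually does with the model.

```python
import math

def sliding_window_bits(token_nlls_fn, tokens, window=1024, stride=64):
    """Score a long sequence with a fixed context window, advancing by
    `stride` tokens and charging only the final `stride` tokens of each
    window, so every scored token sees up to `window` tokens of context.
    Returns mean bits per scored token (a byte-level bpb accounting would
    divide total bits by the byte count instead).
    `token_nlls_fn(ctx)` returns per-token negative log-likelihoods in nats.
    """
    total_nll, n_scored = 0.0, 0
    for start in range(0, len(tokens), stride):
        end = min(start + stride, len(tokens))
        ctx = tokens[max(0, end - window):end]
        nlls = token_nlls_fn(ctx)
        new = nlls[len(ctx) - (end - start):]  # only the newly covered tokens
        total_nll += sum(new)
        n_scored += len(new)
    return total_nll / (n_scored * math.log(2))  # nats -> bits
```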

Additional techniques

OrthoInit, Muon WD=0.04, SWA (last 20%), Late QAT (threshold 0.25), logit softcap 30.0
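The logit softcap of 30.0 listed above is commonly implemented as a scaled tanh; a minimal sketch (the formula is the standard softcap definition, but exactly where this PR applies it, e.g. final logits only, is not stated here):

```python
import math

def softcap(logit: float, cap: float = 30.0) -> float:
    # Soft-capping squashes logits smoothly into (-cap, cap):
    # near zero it is approximately the identity, and for large
    # |logit| it saturates at +/-cap instead of growing unbounded.
    return cap * math.tanh(logit / cap)
```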

Run

```
pip install sentencepiece zstandard
python3 data/cached_challenge_fineweb.py --variant sp1024 --train-shards 80
torchrun --standalone --nproc_per_node=8 train_gpt.py
```

All hyperparameters set as defaults.
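For readers unfamiliar with the BigramHash feature: the general idea is to hash each (previous token, current token) pair into a fixed number of embedding buckets, whose rows are added to the usual token embedding. The bucket count 10240 below matches this PR; the multiplier and hashing scheme are illustrative assumptions, not the PR's actual code.

```python
def bigram_hash_bucket(prev_tok: int, cur_tok: int,
                       n_buckets: int = 10240, mult: int = 1000003) -> int:
    # Map a token bigram to one of n_buckets rows of a learned embedding
    # table. Collisions are expected and tolerated; a larger table (10240
    # vs 2048) simply collides less, which is where the quoted -0.070 bpb
    # vs the 2048-bucket variant comes from.
    return (prev_tok * mult + cur_tok) % n_buckets
```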

…+ Gated Attention + BigramHash(10240) + 12L (val_bpb=1.1690)

First submission combining 6 independently-proven architecture improvements:
- Catalytic Residuals (PR openai#450, -0.024 bpb)
- Value Residual/ResFormer (PR openai#413, -0.015 bpb)
- Gated Attention (PR openai#413, -0.003 bpb)
- BigramHash(10240) (PR openai#450, -0.070 bpb vs 2048)
- 12 Layers (-0.023 bpb vs 11L)
- 3x MLP

8xH100 SXM: 6981 steps, 85.78ms/step, 15.3MB artifact (int6+zstd)
@mohosy

mohosy commented Mar 23, 2026

value residual + gated attention on 12 layers is a lot of new stuff at once lol. did you try adding them one at a time to see which ones actually helped, or did you just yolo the whole stack?

@MatoTeziTanka

Community Review — Non-record: 6-Technique Stack — Catalytic Residuals + Value Residual + Gated Attention + BigramHash(10240) + 12L (val_bpb=1.1690)

BPB: 1.1690 | Compliance: LOOKS CLEAN — pure-neural submission, no TTT/SLOT/n-gram-cache

What I found in the code (head SHA cd3b1385d524, file records/track_10min_16mb/2026-03-22_6TechniqueStack_CatalyticRes_ValueRes_GatedAttn_BigramHash10240/train_gpt.py):

Static code review found no TTT adaptation function, no SLOT optimization loop, no n-gram-cache class, and no pre-quant val-token fine-tune. The eval path uses the standard sliding-window stride-64 pattern. The submission is a pure-neural architecture iteration on the standard SP1024/SP4096/SP8192 baseline.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.02s, dim=512, layers=12, vocab=1024, code=60630 B, SMOKE_TEST_PASS

Verdict: LOOKS CLEAN.

Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending the usual record-track checks (3-seed validation, under-16MB artifact cap, ≤600s train + ≤600s eval on 8×H100 SXM). No compliance flags from the classification pass — this looks like a clean pure-neural iteration on the standard baseline.

Auto-classification caveat: this review was drafted by the AST-based classifier. If there's a non-standard eval mechanism (logit postprocessing, hedge mixing, etc.) that I missed because it's factored into a helper file or a non-standard function name, please flag it and I'll re-run the audit manually.


Reviewed by @MatoTeziTanka (The Agora). Classification via the deterministic AST-based classify_prs.py (pattern bank derived from ~65 manually reviewed PRs earlier in the 2026-04-11 sweep). This review was auto-drafted from a template and spot-checked before posting; if the template misread your code, please call it out so I can iterate the classifier.
