Record: Seed-Regenerated Random Model + Incremental N-gram Cache — val_bpb 0.0905 #1095
vimeto wants to merge 1 commit into openai:main
Force-pushed 776a620 to 38c5e7d
Community Review — Record: Seed-Regenerated Random Model + Incremental N-gram Cache — val_bpb 0.0905

BPB: 0.0905 | Compliance: LOOKS CLEAN — score-first-per-chunk TTT (legal #1416/#1423 pattern)

What I found in the code (head SHA 38c5e7d): the TTT path at line 1374 implements the score-first-per-chunk pattern, so each chunk is scored under the adapter state from before its own update. Per Issue #402 and Issue #677, TTT is legal when each token is scored before the adapter updates on it, and that's what the code does here.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.08s, dim=512, layers=5, vocab=1024, code=103740 B, SMOKE_TEST_PASS.

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending standard checks (3-seed validation, 16MB artifact cap, 10-min wallclock on 8×H100 SXM). The compliance picture matches the legal reference frontier and no flags were raised by the classification pass.

Auto-classification caveat: this review was drafted by the AST-based classifier against a template derived from manually-reviewed cluster PRs (#1420, #1450, #1487, #1541, #1529, #1533, #1518). If I've misread a subtlety in your eval path — e.g., multi-epoch TTT that I mistook for single-pass, or a target-in-key lookup I missed in a helper function — please flag it and I'll re-run the audit manually.

Reviewed by @MatoTeziTanka — The Agora. Classification via deterministic AST-based classifier.
val_bpb = 0.0905 (1 seed, additional seeds pending H100 access) | 15.09 MB | 8xH100 SXM
Results (8xH100 80GB SXM, PyTorch 2.7.1)
Additional seeds pending H100 access.
Key Innovation: Zero-Cost Base Weights
All transformer weight matrices use frozen orthogonal random projections regenerated from 8-byte seeds at load time (0 bytes in the artifact). Only the rank-64 LoRA adapters are stored (3.9 MB). The remaining 11 MB holds an incrementally built INT16 n-gram cache (orders 2-7, 31B counts, synced via 8-GPU all-reduce).
Why orthogonal: prior work (PR #874) used Gaussian random bases but could not train past 5 layers. Our QR-decomposed orthogonal init keeps all singular values at exactly 1.0, enabling stable deep training.
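A minimal sketch of the seed-regeneration idea, assuming a NumPy-style seeded QR init (the function name, seed value, and shapes here are illustrative, not the PR's actual code):

```python
# Regenerate a frozen orthogonal base matrix from an 8-byte seed.
# Nothing but the seed needs to be stored in the artifact.
import numpy as np

def regen_orthogonal(seed: int, rows: int, cols: int) -> np.ndarray:
    """QR-decompose a seeded Gaussian; the Q factor has orthonormal
    columns, i.e. all singular values equal to 1.0."""
    rng = np.random.default_rng(seed)       # deterministic stream from the seed
    g = rng.standard_normal((rows, cols))
    q, r = np.linalg.qr(g)
    # Canonicalize the QR sign ambiguity so the matrix is unique
    # across regenerations (sketch: ignores the measure-zero r_ii == 0 case).
    q = q * np.sign(np.diag(r))
    return q

w = regen_orthogonal(seed=0x1234ABCD, rows=512, cols=512)   # hypothetical seed
s = np.linalg.svd(w, compute_uv=False)
assert np.allclose(s, 1.0)   # singular values preserved at 1.0
```

Regenerating the same seed at load time reproduces the identical matrix bit-for-bit, which is what makes the base weights cost 0 bytes in the artifact.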
Adapter quantization: simple per-row INT8 gives a quantization gap of only +0.003 BPB (vs. +0.006 for the baseline INT6 GPTQ).
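A sketch of symmetric per-row INT8 quantization of the kind described above (function names and the scale convention are assumptions, not the PR's implementation):

```python
# Per-row symmetric INT8 quantization: each row gets its own scale,
# so w is approximated as q * scale[:, None].
import numpy as np

def quant_int8_per_row(w: np.ndarray):
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)    # guard all-zero rows
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale.astype(np.float32)

def dequant(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).standard_normal((64, 512)).astype(np.float32)
q, s = quant_int8_per_row(w)
# Worst-case round-trip error is half a quantization step per row.
err = np.abs(dequant(q, s) - w).max()
```

Per-row scales matter for LoRA adapters because row norms can vary widely; a single tensor-wide scale would waste most of the INT8 range on small rows.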
Incremental N-gram Cache (Zero Overhead)
The cache is built during training by calling update_batch_fast() after each microstep (under 1 ms of overhead). After training, counts are all-reduced across the 8 GPUs and LZMA-compressed into the artifact. At eval the cache is frozen: no TTT.
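The per-microstep update can be sketched as follows; this pure-Python NgramCache with Counter tables is an illustrative stand-in for the PR's INT16 GPU tables and all-reduce, and only the update_batch_fast name comes from the text above:

```python
# Toy sketch of the incremental n-gram cache: after every microstep,
# slide each order-n window over the batch tokens and bump its count.
from collections import Counter

class NgramCache:
    def __init__(self, orders=range(2, 8)):       # orders 2-7, as in the PR
        self.orders = tuple(orders)
        self.counts = {n: Counter() for n in self.orders}

    def update_batch_fast(self, tokens):
        for n in self.orders:
            table = self.counts[n]
            for i in range(len(tokens) - n + 1):
                table[tuple(tokens[i:i + n])] += 1

cache = NgramCache()
cache.update_batch_fast([1, 2, 3, 1, 2])
cache.counts[2][(1, 2)]   # -> 2: the bigram (1, 2) appears twice
```

In the real setup the final cross-GPU merge would be a single sum all-reduce over the count tensors after training, which is why per-step overhead stays negligible.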
We tested pre-filling from training shards at startup: 10x worse (0.996 BPB) due to pre-fill consuming 24-33% of the training budget.
Architecture
5L 512d, 8H/4KV, MLP 3.0, LeakyReLU(0.5) squared, rank-64 LoRA adapters, tied embeddings, vocab 1024
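The one unusual item in the spec above is the activation. A NumPy stand-in for "LeakyReLU(0.5) squared" (the function name is mine; the PR presumably uses a torch module):

```python
# LeakyReLU with negative slope 0.5, then squared elementwise.
# Squaring makes the output non-negative on both branches:
# x >= 0 -> x**2, x < 0 -> 0.25 * x**2.
import numpy as np

def squared_leaky_relu(x: np.ndarray, slope: float = 0.5) -> np.ndarray:
    y = np.where(x >= 0, x, slope * x)
    return y * y

squared_leaky_relu(np.array([2.0, -2.0, 0.0]))   # -> [4.0, 1.0, 0.0]
```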
Credits