Record: TMA Megakernel + Improved Parallel Residuals + Tap-In min_match=1 — val_bpb 1.07636 (3-seed mean) #1555
Open
andrewbaggio1 wants to merge 2 commits into openai:main
Conversation
8 Gated DeltaNet layers + 2 softmax attention layers. GDN is mathematically equivalent to E2E TTT-Linear with MSE loss. First competitive GDN hybrid in the 10-min budget. Targets bounty items: E2E TTT + State-space models. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
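The claimed equivalence between Gated DeltaNet and end-to-end TTT-Linear with MSE loss can be made concrete: the ungated delta rule state update is exactly one SGD step on a per-token MSE reconstruction loss, with the gate adding a state decay on top. A minimal sketch (function names and the plain-SGD formulation are illustrative assumptions, not this PR's kernel code):

```python
import numpy as np

def gated_delta_step(S, k, v, alpha, beta):
    """One Gated DeltaNet state update.

    The ungated delta rule S <- S - beta * (S k - v) k^T is one SGD
    step on L(S) = 0.5 * ||S k - v||^2 with learning rate beta, which
    is the sense in which GDN matches E2E TTT-Linear with MSE loss.
    alpha is the gate (a scalar state decay) applied on top.
    """
    pred_err = S @ k - v  # residual of the linear "fast weight" prediction
    return alpha * (S - beta * np.outer(pred_err, k))

def ttt_linear_step(S, k, v, alpha, beta):
    """Same update written explicitly as a gradient step on MSE."""
    grad = np.outer(S @ k - v, k)  # dL/dS for L = 0.5 * ||S k - v||^2
    return alpha * (S - beta * grad)
```

Running both on the same state and key/value pair produces identical results, which is the equivalence the commit message appeals to.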
…ch=1 — val_bpb 1.07636 (3-seed mean)
3-seed mean 1.07636 BPB (std 0.0006), delta -0.00897 nats vs merged SOTA openai#1493. Novel: TMA fused MLP kernel, Tap-In unigram matching (min_match=1, fires at 21% of positions), improved parallel residuals from openai#1529, parameter banking from openai#1523. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
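The PR gives only the headline facts about Tap-In unigram matching (min_match=1, firing at 21% of positions), so the following is a hedged sketch of one plausible reading: at each position, look up earlier occurrences of the current token and turn the empirical distribution of their followers into an additive next-token logit bias. The function name, the count table, and the normalized-count blending are all illustrative assumptions, not the PR's implementation:

```python
from collections import defaultdict

def tapin_unigram_logit_bias(tokens, vocab_size, weight=1.0):
    """Sketch of a Tap-In-style unigram matcher (min_match=1).

    For position t, earlier occurrences of tokens[t] vote (by count)
    for the token that followed them; the normalized votes become an
    additive bias on the next-token logits. A position "fires" only
    when the current token has been seen before.
    """
    followers = defaultdict(lambda: defaultdict(int))  # token -> next token -> count
    biases = []
    for t, tok in enumerate(tokens):
        bias = [0.0] * vocab_size
        total = sum(followers[tok].values())
        if total:  # at least one unigram match: this position fires
            for nxt, c in followers[tok].items():
                bias[nxt] = weight * c / total
        biases.append(bias)
        if t + 1 < len(tokens):  # record this occurrence's follower
            followers[tok][tokens[t + 1]] += 1
    return biases
```

With min_match=1 the matcher can fire on a single prior occurrence, which is why coverage (21% of positions here) rather than match length becomes the interesting statistic.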
resouer added a commit to resouer/parameter-golf that referenced this pull request on Apr 12, 2026
The current W2 frontier point is already close to the public best clean-ish line, so the highest-upside architectural import is the improved parallel residual writeback from the openai#1529/openai#1555 family. This patch ports the learned cross-lane lambda mixing into the existing split-lane decoder while keeping the pass-conditioned attention modulation and score-first doc-independent TTT stack intact.
Constraint: single-node budget means the next experiment needs real upside, not another tiny hyperparameter nudge.
Rejected: Tap-In min_match=1 import first | higher upside on paper, but much riskier on bytes, runtime, and review surface than improved parallel residuals.
Confidence: medium. Scope-risk: moderate.
Directive: if this lane regresses, treat improved parallel residuals as non-additive with the current W2 modulation stack rather than trying to rescue it with more tuning.
Tested: python3 -m py_compile train_gpt.py; LSP diagnostics reported no file-level errors.
Not tested: GPU score, bytes, and runtime on the integrated lane.
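The "learned cross-lane lambda mixing" above can be sketched in a few lines: rather than each residual lane adding back only its own branch output, a learned lane-by-lane matrix decides how much of every branch's output is written into every lane. The function name, shapes, and the single mixing matrix are illustrative assumptions about the #1529/#1555 mechanism, not ported code:

```python
import numpy as np

def parallel_residual_writeback(lanes, branch_outs, lam):
    """Sketch of learned cross-lane lambda mixing for parallel residuals.

    lanes:       (L, d) array of L parallel residual-stream lanes.
    branch_outs: (L, d) per-lane block outputs for this layer.
    lam:         (L, L) learned mixing matrix; lam[i, j] scales how much
                 of branch j's output is written back into lane i.

    With lam = I this reduces to ordinary independent residual
    connections, so the learned matrix only has to improve on that.
    """
    return lanes + lam @ branch_outs
```

The identity-reduction property is also why this import is low-risk to wire in: initializing lam near the identity recovers the baseline behavior.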
sunnypatneedi pushed a commit to sunnypatneedi/parameter-golf that referenced this pull request on Apr 12, 2026
…1.01710 Merged SOTA changed from 1.1147 to 1.0810 (PR openai#1493, bigbag, 2026-04-09). Seven PRs merged in 5 days (openai#1334, openai#1285, openai#1394, openai#1412, openai#1413, openai#1477, openai#1493). New target: ≤1.0760 val_bpb. 18 days to deadline.
Key findings:
- GDN-Hybrid (PR openai#1564): 1.01710 BPB, no TTT/SLOT — monitor for organizer review
- VarLen Attention + Doc-TTT (PR openai#1560): 1.07406 BPB — implement next
- TMA Megakernel + Tap-In (PR openai#1555): 1.07636 BPB — add after openai#1560
- PR openai#731 n-gram (dense count + Laplace): reviewer says LOOKS CLEAN, awaiting 3rd seed
- PR openai#758: major legality flags, do not implement
Updated CLAUDE.md: Competition Strategy, Technique Reference, Lessons Learned (Session 9). Updated logs/daily_research.md: new 2026-04-12 entry prepended. https://claude.ai/code/session_011WyxjcwdigLhMFQDjLL5ss
resouer added a commit to resouer/parameter-golf that referenced this pull request on Apr 12, 2026
The current W2 frontier point already has a strong score/runtime tradeoff, so the next high-upside import should add eval-time capacity without bringing in extra source files or a broad new review surface. This patch ports the eval-hash embedding path from the openai#1555 family: a zero-init hash embedding attached only during evaluation/TTT, hashed on the previous/current token pair, and trained with a higher LR multiplier during the score-first LoRA TTT loop.
Constraint: single-node iteration favors compact eval-time additions over large architecture or C++ retrieval ports.
Rejected: Tap-In import first | higher upside on paper, but much riskier on code size, review surface, and implementation complexity.
Confidence: medium. Scope-risk: moderate.
Directive: if this lane improves score but blows runtime, tune the hash buckets or LR multiplier before combining it with any other eval-time mechanism.
Tested: python3 -m py_compile train_gpt.py
Not tested: GPU score, bytes, and eval runtime with eval-hash enabled.
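The eval-hash path described above has three load-bearing properties worth pinning down: the table is zero-initialized (so attaching it at eval time is a no-op until the TTT loop updates it), the key is a hash of the (previous, current) token pair, and updates use a higher LR multiplier than the rest of the TTT parameters. A minimal sketch, where the class name, bucket count, hashing constant, and plain-SGD step are all illustrative assumptions:

```python
import numpy as np

def pair_bucket(prev_tok, cur_tok, n_buckets):
    """Hash the (previous, current) token pair into an embedding bucket."""
    return (prev_tok * 1000003 + cur_tok) % n_buckets

class EvalHashEmbedding:
    """Sketch of a zero-init, eval-only hash embedding with boosted LR."""

    def __init__(self, n_buckets, dim, lr_mult=10.0):
        self.table = np.zeros((n_buckets, dim))  # zero-init: no-op at attach time
        self.n_buckets = n_buckets
        self.lr_mult = lr_mult  # the "higher LR multiplier" for the TTT loop

    def forward(self, prev_tok, cur_tok):
        return self.table[pair_bucket(prev_tok, cur_tok, self.n_buckets)]

    def ttt_step(self, prev_tok, cur_tok, grad, base_lr):
        """One boosted SGD update on this pair's bucket during TTT."""
        b = pair_bucket(prev_tok, cur_tok, self.n_buckets)
        self.table[b] -= self.lr_mult * base_lr * grad
```

Zero-init is what makes the mechanism safe to gate on eval only: the base model's scores are untouched until the first TTT step writes into a bucket.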
Summary
val_bpb = 1.07636 (3-seed mean, std 0.0006) | ~15.97 MB | 8xH100 SXM
Merged SOTA (PR #1493): 2.78932 nats. Delta: -0.00897 nats (clears 0.005 threshold by 80%).
Novel Contributions
Compliance (Track B — Issue #1017)
Test plan
Credits
@msisovic (improved parallel residuals #1529), @abaybektursun (Tap-In V4/V6 #1518/#1420, TTT #549), @clarkkev (SP8192 + SDClip #1394), @EthanYangTW (parameter banking #1523), @dexhunter (legal TTT #1413), @resouer (eval hash embedding #1460), @bigbag (QK-Gain tuning #1493)
🤖 Generated with Claude Code