Add non-record AR GPTQ XSA ROTQ Hadamard submission #1224
Open

vermissa0ss wants to merge 2 commits into openai:main
Summary
Adds a non-record submission folder for a rotation-aware GPTQ variant on top of the public AR self-generated GPTQ +
XSA-all + BigramHash stack.
Folder added:

records/track_non_record_16mb/2026-04-01_ar_gptq_xsa_rotq_hadamard/

What this submission is
This is a non-record submission.
It uses the same evaluation metric and artifact accounting as the main leaderboard, but it is not a main-track record claim because the key run was trained on 1xH100 for 4800 seconds and was not reproduced under the official 8xH100 SXM / 600-second budget.
Main idea
The new ingredient is modular rotation-aware GPTQ:
The best result here used per-layer Hadamard right-rotations on mlp_up and mlp_down with a block-size search over {128, 256, 512}.
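A minimal sketch of what such an export path could look like, assuming NumPy/SciPy plus hypothetical gptq_quantize / dequantize helpers standing in for the repo's actual quantizer (only numpy and scipy.linalg.hadamard are real APIs; reading the ablation labels, "per-layer Hadamard" would correspond to a block equal to the full layer width when that width is a power of two):

```python
# Sketch only: `gptq_quantize` / `dequantize` are hypothetical stand-ins for
# the repo's actual GPTQ quantizer; numpy and scipy.linalg.hadamard are real.
import numpy as np
from scipy.linalg import hadamard


def block_hadamard_rotation(n_cols: int, block: int) -> np.ndarray:
    """Block-diagonal orthogonal matrix built from a normalized Hadamard block."""
    assert n_cols % block == 0, "columns must be a multiple of the block size"
    h = hadamard(block).astype(np.float64) / np.sqrt(block)  # orthonormal block
    r = np.zeros((n_cols, n_cols))
    for i in range(0, n_cols, block):
        r[i:i + block, i:i + block] = h
    return r


def rotate_then_quantize(w: np.ndarray, block: int, gptq_quantize):
    """Right-rotate W by a block-Hadamard R, then quantize the rotated copy.

    R is deterministic given `block`, so the export only has to record the
    block size; a loader can rebuild R and undo the rotation with
    W ~= dequantize(q) @ R.T (R is orthogonal).
    """
    r = block_hadamard_rotation(w.shape[1], block)
    return gptq_quantize(w @ r)


def search_block_size(w, gptq_quantize, dequantize, blocks=(128, 256, 512)):
    """Pick the block size with the lowest reconstruction error.

    Note: this uses reconstruction error as a proxy objective; the ablations
    below select on val_bpb instead.
    """
    best_err, best_block = None, None
    for b in blocks:
        r = block_hadamard_rotation(w.shape[1], b)
        q = gptq_quantize(w @ r)
        err = np.linalg.norm(dequantize(q) @ r.T - w)
        if best_err is None or err < best_err:
            best_err, best_block = err, b
    return best_block
```

One appeal of this construction for an artifact-capped track is that the rotation never has to be stored explicitly: given the block size, a loader can regenerate R exactly.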
Results

Best result in this folder:

val_bpb: 1.11290586
val_loss: 1.87908996
artifact bytes: 15,826,148
run: 1xH100, 4800s, seed 314

Export-only ablations on the same checkpoint:

no rotation (baseline): 1.11296252
mlp_down Hadamard256: 1.11291713
mlp_down per-layer Hadamard: 1.11290938
mlp_up + mlp_down per-layer Hadamard: 1.11290586

So the best rotation-aware export improves the same-checkpoint export path by -0.00005666 BPB while staying under the 16MB cap.
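A quick arithmetic check of the quoted numbers, assuming the unlabeled 15,826,148 figure is the artifact size in bytes and the 16MB cap means 16 MiB:

```python
# Verify the quoted BPB delta between the ablation endpoints.
baseline_bpb = 1.11296252  # same-checkpoint export, no rotation
rotated_bpb = 1.11290586   # mlp_up + mlp_down per-layer Hadamard
print(f"{rotated_bpb - baseline_bpb:+.8f} BPB")  # -0.00005666 BPB

artifact_bytes = 15_826_148
cap_bytes = 16 * 1024 * 1024  # assumption: the 16MB cap is 16 MiB = 16,777,216 bytes
print(artifact_bytes < cap_bytes)  # True, so the export stays under the cap
```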
Notes
Log sync completed.