Non-record: QNA + SQWA compression thesis (8xH100 SXM)#975
Abhishek8108 wants to merge 1 commit into openai:main
Conversation
Controlled 3-run ablation testing whether Quantization Noise Annealing and Stochastic Quantized Weight Averaging improve post-quantization BPB. Results: QNA reduced the quant gap by 7% and QNA+SQWA by 65%, but neither improved the final sliding-window metric (1.1216 baseline vs 1.1258 QNA+SQWA). The bottleneck in current SOTA is float model quality, not quantization error.
Community Review — Non-record: QNA + SQWA compression thesis (8xH100 SXM)

BPB: 0.008 (cache parse; may be delta/std rather than val_bpb — check PR title) | Compliance: LOOKS CLEAN — score-first-per-chunk TTT (legal #1416/#1423 pattern)

What I found in the code: the TTT path at line 1044 implements the score-first-per-chunk pattern. Per Issue #402 and Issue #677, TTT is legal when each token is scored before the adapter updates on it, and that is what the code does here: each chunk is scored before the adapter trains on it.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.06s, dim=512, layers=11, vocab=1024, code=89824 B, SMOKE_TEST_PASS.

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending standard checks (3-seed validation, 16MB artifact cap, 10-min wallclock on 8×H100 SXM). The compliance picture matches the legal reference frontier and no flags were raised by the classification pass.

Auto-classification caveat: this review was drafted by the deterministic AST-based classifier against a template derived from manually-reviewed cluster PRs (#1420, #1450, #1487, #1541, #1529, #1533, #1518). If I've misread a subtlety in your eval path — e.g., multi-epoch TTT that I mistook for single-pass, or a target-in-key lookup I missed in a helper function — please flag it and I'll re-run the audit manually.

Reviewed by @MatoTeziTanka — The Agora.
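To make the compliance claim concrete, here is a toy illustration of the score-first-per-chunk rule the review checks for: every chunk is scored under the model state that existed before the model updated on that chunk. The model below is a deliberately simple add-one-smoothed unigram counter standing in for the real adapter; all names are hypothetical and this is not the code in train_gpt.py.

```python
import math
from collections import Counter

def score_first_per_chunk(chunks, vocab_size):
    """Toy TTT loop obeying the score-first rule: each chunk is scored
    with the PRE-update state, then the model adapts on it once, so no
    token's score benefits from having already been trained on."""
    counts = Counter()
    seen = 0
    total_nll, total_tokens = 0.0, 0
    for chunk in chunks:
        # 1. Score the whole chunk first, under the current state.
        for tok in chunk:
            p = (counts[tok] + 1) / (seen + vocab_size)  # add-one smoothing
            total_nll += -math.log2(p)
            total_tokens += 1
        # 2. Only now update on the chunk (single pass, no revisiting).
        counts.update(chunk)
        seen += len(chunk)
    return total_nll / total_tokens  # bits per token
```

Reordering steps 1 and 2 (update first, then score) is exactly the pattern that would make a TTT eval illegal under the rule cited above.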
Summary
Non-record submission documenting a controlled 3-run ablation on 8xH100 SXM testing whether training for the quantized artifact directly improves post-quantization BPB.
Results
Interesting negative result: both techniques work mechanistically — QNA alone reduced the quant gap by 7%, and QNA+SQWA by 65%. But neither improved the final leaderboard metric. The bottleneck in current SOTA is float model quality, not quantization error.
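The write-up does not spell out the two mechanisms beyond their names; the following is a rough sketch of how they are commonly understood, with every detail (grid size, annealing schedule, rounding scheme) an assumption rather than what train_gpt.py actually does.

```python
import random

def fake_quantize(w, step):
    """Round a weight to the nearest point on a uniform grid of size `step`."""
    return round(w / step) * step

def qna_perturb(w, step, anneal):
    """Quantization Noise Annealing (sketch): pull a weight toward its
    quantized value with strength `anneal` in [0, 1], so training
    gradually 'feels' the rounding error the final artifact will incur.
    anneal=0 leaves the float weight untouched; anneal=1 fully quantizes."""
    return w + anneal * (fake_quantize(w, step) - w)

def sqwa(snapshots, step):
    """Stochastic Quantized Weight Averaging (sketch): stochastically
    round several weight snapshots to the grid and average them, so the
    averaged weight lands near a low-rounding-error point."""
    def stochastic_round(w):
        lo = (w // step) * step
        frac = (w - lo) / step  # probability of rounding up
        return lo + step if random.random() < frac else lo
    return [sum(stochastic_round(w) for w in ws) / len(ws)
            for ws in zip(*snapshots)]
```

Under this reading, both techniques shrink the float-to-quantized mismatch, which is consistent with the gap reductions reported while leaving the float model itself unchanged.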
Key takeaway
With a baseline quant gap of only ~0.008 BPB, the existing late QAT already handles quantization well enough. Future improvements should target the float model directly rather than compression alignment.
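The quant gap quoted here is presumably just the BPB delta between the float checkpoint and its quantized artifact on the same eval set. A trivial sketch of that bookkeeping (the BPB inputs in the test are illustrative, not reported values):

```python
def quant_gap(bpb_quantized, bpb_float):
    """Quant gap: how much worse (in BPB) the quantized artifact scores
    than the float model it was derived from."""
    return bpb_quantized - bpb_float

def gap_reduction(gap_baseline, gap_treated):
    """Fractional reduction in quant gap from a treatment, e.g. the
    ~65% figure quoted for QNA+SQWA."""
    return 1.0 - gap_treated / gap_baseline
```

With a baseline gap of ~0.008 BPB, even a treatment that closed the gap entirely could move the final metric by at most ~0.008, which is why a slightly worse float model can erase the win.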
Files
train_gpt.py — full script with QNA/SQWA (env-var toggleable, defaults off)
run1_base.log, run2_qna.log, run3_qna_sqwa.log — complete training logs
README.md — detailed write-up with implementation details and analysis
submission.json — metadata
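The "env-var toggleable, defaults off" wiring presumably looks something like the following; the variable names are guesses for illustration, not the actual ones in train_gpt.py.

```python
import os

def env_flag(name):
    """Boolean feature flag read from the environment: '1', 'true', or
    'yes' (case-insensitive) enables it; unset or anything else is off."""
    return os.environ.get(name, "").strip().lower() in {"1", "true", "yes"}

# Hypothetical flag names. Both default off, so the baseline run is
# reproduced by simply not setting the variables.
USE_QNA = env_flag("USE_QNA")    # enable Quantization Noise Annealing
USE_SQWA = env_flag("USE_SQWA")  # enable Stochastic Quantized Weight Averaging
```

Defaulting both flags to off keeps the ablation honest: run1_base.log needs no configuration at all, and each treated run differs from it by exactly one documented environment variable.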