Non-record: Polar STE QAT for structural weights #1154
LucasErcolano wants to merge 8 commits into openai:main
Conversation
Extended local stress test on 1x RTX 3090 completed.
Full 8xH100 Hopper Validation (10-minute wallclock)

We executed a full-length run on the official target hardware to validate the distributed scaling and wallclock budget of this Polar + QJL stack.
Conclusion: the infrastructure, synchronization, export/reload path, and end-to-end math are all proven on Hopper. The remaining problem is model quality, not systems stability: the current gap between teacher-forced and autoregressive BPB indicates the topology still needs stronger structural regularization to handle decode-time asymmetric quantization noise.
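For readers comparing the two eval numbers: bits-per-byte is just the summed cross-entropy converted from nats to bits and normalized by the raw byte count of the eval set, so teacher-forced and autoregressive runs are directly comparable. A minimal sketch (the function name is illustrative, not from this repo):

```python
import math

def bits_per_byte(total_nll_nats: float, n_bytes: int) -> float:
    """Convert a summed cross-entropy (in nats) over a byte-level
    eval set into bits-per-byte: nats -> bits, then divide by bytes."""
    return total_nll_nats / (n_bytes * math.log(2))

# The decode-time gap from the 8xH100 run described in this thread:
gap = 2.1360 - 1.8377  # autoregressive BPB minus teacher-forced BPB
```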
Follow-up local KV-cache result on top of this PR: pushed. This adds a local RTX 3090 probe on the recurrent …
Short sweep over the recent exact-key window:
So the best local point so far is a very small exact-key suffix. I have not re-run Hopper validation for it.
The current code path itself does not add any new distributed collectives; it only changes rank-local KV-cache layout and score computation.
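To make the "small exact-key suffix" concrete, here is a hedged sketch of the rank-local layout change being described: the most recent keys stay exact, and only the older prefix is quantized. The function name is illustrative, and a plain per-key int8 absmax quantizer stands in for the actual QJL backend:

```python
import torch

def split_kv_for_quant(keys: torch.Tensor, exact_suffix: int):
    """Split a [T, d] key cache into (quantized prefix, exact suffix).

    The last `exact_suffix` keys keep full precision; older keys are
    quantized per key with int8 absmax (a stand-in for QJL here).
    """
    if keys.shape[0] <= exact_suffix:
        return None, keys  # cache still fits inside the exact window
    prefix, suffix = keys[:-exact_suffix], keys[-exact_suffix:]
    scale = prefix.abs().amax(dim=-1, keepdim=True).clamp_min(1e-8) / 127.0
    q = torch.round(prefix / scale).to(torch.int8)
    return (q, scale), suffix
```

Score computation then dequantizes the prefix on the fly (`q.float() * scale`) while the suffix keys participate exactly; everything stays rank-local, which is consistent with no new collectives being needed.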
Community Review — Non-record: Polar STE QAT for structural weights

Compliance: NEEDS AUTHOR ACTION

What I found: the CPU smoke test on CT2038 (proteus-engine, 128 GB RAM, Triton 3.6.0, flash_attn stub, cutlass_evt_fusion stub) failed at the import step with: `ModuleNotFoundError: No module named 'triton_kv_ops'`. A few of the common patterns I've seen for this class of error in the 2026-04-11 sweep:
Recommendation: could you run …? Once the parse/import issue is fixed, I'll re-run the compliance audit through the normal pipeline. No other flags identified yet because the audit halts at the import step.

Reviewed by @MatoTeziTanka — The Agora. CPU smoke test (CT2038 proteus-engine, 2026-04-11): IMPORT_FAIL — ModuleNotFoundError: No module named 'triton_kv_ops'. Classification via …
Retraction — this IMPORT_FAIL was a bug in my smoke runner

Sorry @LucasErcolano, this one's on me. I re-audited the IMPORT_FAIL I posted above and it was a false positive: the fault is in how my CPU smoke runner set up …, not in your code.

What happened: the runner imported your …. Verified at head …. On the real eval image (Python 3.10, …) the import succeeds. Your PR is not broken by this error. I'm retracting the IMPORT_FAIL classification. I'll re-queue the full compliance audit (BPB check, n-gram / TTT / SLOT flags, etc.) on the current head and post findings separately.

Again — sorry for the noise. These community reviews only work if I actually read what I'm reviewing, and I didn't in this case.
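For what it's worth, running the import in a fresh child interpreter with an explicit environment avoids exactly this class of runner-contamination false positive, since stubs or `sys.path` hacks in the runner process cannot leak into the result. A generic sketch, not the actual smoke runner:

```python
import os
import subprocess
import sys

def import_smoke_test(module: str, repo_dir: str, timeout: float = 60.0):
    """Try `import module` in a clean child interpreter.

    The child gets the parent's environment plus repo_dir on
    PYTHONPATH, so the runner's own in-process stubs and path
    mutations have no effect on the outcome.
    """
    env = dict(os.environ, PYTHONPATH=repo_dir)
    proc = subprocess.run(
        [sys.executable, "-c", f"import {module}"],
        capture_output=True, text=True, timeout=timeout, env=env,
    )
    return proc.returncode == 0, proc.stderr.strip()
```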
Community Review — Non-record: Polar STE QAT for structural weights

BPB: 2.3861 | Compliance: LOOKS CLEAN — pure-neural submission, no TTT/SLOT/n-gram-cache

What I found in the code (head SHA …): static code review found no TTT adaptation function, no SLOT optimization loop, no n-gram-cache class, and no pre-quant val-token fine-tune. The eval path uses the standard sliding-window stride-64 pattern. The submission is a pure-neural architecture iteration on the standard SP1024/SP4096/SP8192 baseline.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.35s, dim=512, layers=9, vocab=1024, code=130060 B, SMOKE_TEST_PASS

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending the usual record-track checks (3-seed validation, under-16MB artifact cap, ≤600s train + ≤600s eval on 8×H100 SXM). No compliance flags from the classification pass — this looks like a clean pure-neural iteration on the standard baseline.

Auto-classification caveat: this review was drafted by the AST-based classifier. If there's a non-standard eval mechanism (logit postprocessing, hedge mixing, etc.) that I missed because it's factored into a helper file or a non-standard function name, please flag it and I'll re-run the audit manually.

Reviewed by @MatoTeziTanka — The Agora. CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.35s, dim=512, layers=9, vocab=1024, code=130060 B, SMOKE_TEST_PASS. Classification via deterministic AST-based …
Summary
Stacked on top of #1149 and the Triton backend follow-up PR.
This PR extends the same non-record submission with Polar STE QAT for structural weights and a matching polar export path:
- `QAT_SCHEME=polar` for `CastedLinear` weight fake-quant during training
- `WEIGHT_QUANT_SCHEME=polar` for large 2D structural tensors at export time
- `polar+zlib` export path for the final artifact
- `WORLD_SIZE>1` runs can train under DDP without deadlocking at final eval

Local Validation
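As context for the validation numbers below, here is a minimal sketch of straight-through-estimator fake-quant on a linear layer's weights. The "polar" quantizer shown (per-row sign pattern times mean magnitude, i.e. a magnitude/direction split) is a guess at the scheme, and the class names are illustrative, not the PR's code:

```python
import torch

class PolarSTEQuant(torch.autograd.Function):
    """Fake-quant forward, identity ('straight-through') backward."""

    @staticmethod
    def forward(ctx, w):
        # Assumed 'polar' split: per-row magnitude times sign direction.
        scale = w.abs().mean(dim=-1, keepdim=True)
        return torch.sign(w) * scale

    @staticmethod
    def backward(ctx, grad_out):
        # STE: gradients flow as if quantization were the identity.
        return grad_out

class PolarCastedLinear(torch.nn.Linear):
    """Linear layer that trains against the fake-quantized weight."""

    def forward(self, x):
        w_q = PolarSTEQuant.apply(self.weight)
        return torch.nn.functional.linear(x, w_q, self.bias)
```

The point of fake-quant during training is that the optimizer only ever sees losses computed through the quantized forward values, so the weights exported under the matching polar scheme behave like the weights that were actually tuned.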
Short RTX 3090 smoke training (`ITERATIONS=8`, `TRAIN_SEQ_LEN=256`, `QAT_SCHEME=polar`, `WEIGHT_QUANT_SCHEME=polar`):

- train loss `17.3560 -> 10.4964` from steps `2 -> 8`
- `val_bpb`: `5.5849`
- `qjl_triton` smoke eval: `70.65 tok/s` on 256 validation tokens
- `polar+zlib` artifact size: `14,510,929` bytes

Extended RTX 3090 local stress test (`ITERATIONS=640`, `TRAIN_SEQ_LEN=256`, `VAL_MAX_TOKENS=32768`):

- `val_bpb`: `4.0166 -> 2.3356 -> 2.1329 -> 2.0218 -> 1.9775 -> 1.9278 -> 1.8482 -> 1.8164 -> 1.8250 -> 1.7893 -> 1.7757`
- first dropped below `2.0` at step `256`; the small bump at step `512` recovered by the end of the run
- `qjl_triton` eval on 1024 validation tokens: `val_bpb=2.3861`, `73.81 tok/s`
- `polar+zlib` artifact size after the long run: `14,782,032` bytes

Artifact isolation / autonomy check (`artifact_isolation_check.py`, fresh Python process):

- loaded `final_model.polar.ptz` and generated a continuation from the prompt `La cuantizacion polar`: `…s, and the polars are the best p…`
- `qjl_triton` memory profile: `steady_state_allocated_growth=9216 bytes`, `peak_memory_allocated=88,040,960 bytes`, `peak_memory_reserved=106,954,752 bytes`

Hopper validation:

- a run with `MAX_WALLCLOCK_SECONDS=60` compiled and ran successfully with `qjl_triton`; no VRAM anomalies (peak memory allocated: 1987 MiB)
- throughput favored `qjl` over `qjl_triton` on this workload (`115.51 tok/s` vs `110.13 tok/s` on a 1024-token isolated profile)
- `ENABLE_TORCH_COMPILE=1` was functional but incurred ~`200s` of compile overhead, so the intended record-run configuration keeps `ENABLE_TORCH_COMPILE=0`
- the 8-GPU run (`torchrun --nproc_per_node=8`, `MAX_WALLCLOCK_SECONDS=60`, `KV_QUANT_BACKEND=qjl`) completed without deadlock; the final rank-0-only autoregressive eval also completed successfully: `319` train steps in `60.176s`, teacher-forced `val_bpb=1.8377`, final autoregressive `qjl` eval `val_bpb=2.1360`, `92.55 tok/s`

Additional sanity checks:

- `3.6e-4` …
- `torch.inference_mode()` could leave cached inference tensors behind and break later training steps

Review Notes
This branch is intentionally stacked. Until lower PRs merge, the review focus is the top commit sequence:
- `ba91df0` Add Polar STE QAT and polar weight export
- `b393705` Fix RoPE eval cache and add artifact isolation harness
- `ed8f388` Make final KV eval safe under distributed training
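The "final KV eval safe under distributed training" fix in `ed8f388` is, conceptually, about keeping all ranks in lockstep while only rank 0 runs the expensive autoregressive eval. A generic sketch of that pattern, not the PR's actual code:

```python
import torch.distributed as dist

def final_eval_rank0_only(model, eval_fn):
    """Run eval_fn(model) on rank 0 only, without deadlocking DDP.

    Every rank executes both barriers, so no rank is left waiting
    inside a collective while rank 0 is busy evaluating.
    """
    if not dist.is_initialized():
        return eval_fn(model)       # single-process fallback
    dist.barrier()                  # all ranks arrive before eval starts
    result = eval_fn(model) if dist.get_rank() == 0 else None
    dist.barrier()                  # non-zero ranks wait for rank 0
    return result
```

The deadlock this avoids is the classic one: if rank 0 calls into eval code containing a collective (or simply takes minutes) while the other ranks have already moved on to a different collective, the process group hangs at mismatched operations.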