Record: L-BFGS Causal SLOT — val_bpb 1.0046 (3-seed mean)#5

Closed
resouer wants to merge 4 commits into main from submission/lbfgs-causal-slot

Conversation


resouer (Owner) commented Apr 4, 2026

Summary

3-seed mean val_bpb: 1.0046 (std 0.0003) | ~15.8 MB | 8xH100 SXM | ~556s SLOT eval

Merged SOTA (PR openai#1019, 3-seed mean): 1.88218 nats. This run: 1.69620 nats. Delta: -0.186 nats. Clears the 0.005-nat threshold.

Results (3-seed)

| Seed | Sliding BPP | + Causal SLOT BPP | val_loss (nats) | Artifact (bytes) |
| --- | --- | --- | --- | --- |
| 1337 | 1.0925 | 1.0043 | 1.6957 | 15,803,625 |
| 42 | 1.0925 | 1.0048 | 1.6965 | 15,808,775 |
| 2025 | 1.0925 | 1.0047 | 1.6964 | 15,794,277 |
| **Mean** | 1.0925 | 1.0046 | 1.6962 | |

Changes from Merged SOTA (PR openai#1019)

1. L-BFGS Causal SLOT in Logit Space (Novel)

Standard SLOT optimizes the per-window delta using the loss from ALL positions, including future ones; PR openai#1240 demonstrated 100% causal violation. Our causal SLOT restricts the optimization loss to already-scored context positions only, using an L-BFGS optimizer in logit space (max_iter=25, history=20, focal loss on the last 128 tokens, warm start, delta clamped to +/-5). Delta: -0.087 BPP, at ~556s eval cost.

Nearest prior PR: openai#1318 (L-BFGS logit SLOT, non-causal). The difference here is the causal constraint on the optimization: loss is computed from context positions only.
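The PR describes the mechanism but includes no snippet here, so below is a minimal NumPy/SciPy sketch of the causal constraint. The function name `causal_slot_delta` and the toy shapes are mine; the actual run uses a torch L-BFGS with warm start and focal loss on the last 128 tokens, all omitted for brevity.

```python
import numpy as np
from scipy.optimize import minimize

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def causal_slot_delta(logits, targets, t, clamp=5.0, max_iter=25):
    """Fit a per-window logit bias (delta) with L-BFGS.

    The loss is the mean cross-entropy over *already-scored* context
    positions [0, t) only, so the token being scored at position t
    never influences its own score (the causal constraint)."""
    vocab = logits.shape[-1]

    def ce_on_context(delta):
        probs = softmax(logits[:t] + delta)        # context rows only
        picked = probs[np.arange(t), targets[:t]]
        return float(-np.log(picked + 1e-12).mean())

    res = minimize(ce_on_context, np.zeros(vocab), method="L-BFGS-B",
                   bounds=[(-clamp, clamp)] * vocab,
                   options={"maxiter": max_iter})
    return res.x  # applied to logits at position t, then discarded
```

Because the delta is fit only on positions before t, applying it at position t cannot leak the current token's identity into its own probability.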

2. Pre-quant AdamW TTT (6 epochs)

AdamW TTT on full-precision EMA weights before GPTQ. Delta: -0.022 BPP, 110s.
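The PR does not include the TTT loop itself; as a reference for what each update does, here is a plain-NumPy sketch of a single AdamW step. The hyperparameter defaults below are illustrative, not the values used in the run.

```python
import numpy as np

def adamw_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=0.01):
    """One AdamW step (Loshchilov & Hutter): Adam moment updates plus
    weight decay applied directly to the weights, not folded into g."""
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)          # bias correction; t starts at 1
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * (m_hat / (np.sqrt(v_hat) + eps) + weight_decay * w)
    return w, m, v
```

In the pre-quant setting, a few epochs of such steps run on the full-precision EMA weights, and only afterwards does GPTQ quantize the result.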

3. Coprime-stride multi-shard data loader

Weighted random shard sampling with coprime stride. Delta: -0.003 BPP.
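The PR names the trick without code. The core property, sketched below under function names of my own choosing, is that a stride coprime with the number of shards walks the whole ring Z/nZ before repeating, so a randomized traversal still touches every shard exactly once per cycle:

```python
from math import gcd
import random

def coprime_stride_order(n, seed=0):
    """Visit all n shard indices exactly once by stepping with a random
    stride coprime to n: since gcd(stride, n) == 1, the sequence
    (start + i * stride) mod n is a permutation of 0..n-1."""
    rng = random.Random(seed)
    stride = rng.randrange(1, n)
    while gcd(stride, n) != 1:
        stride = rng.randrange(1, n)
    start = rng.randrange(n)
    return [(start + i * stride) % n for i in range(n)]
```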

4. Config batch (QK_GAIN=5.0, WARMDOWN=4000, GPTQ damp=0.005)

Delta: ~-0.003 BPP combined.

Compliance

Satisfies all four NoesisGenesis conditions (Issue openai#677):

  1. p_t depends only on artifact and prefix x_1...x_{t-1} — causal SLOT uses only already-scored positions
  2. Full softmax over 1024-token vocabulary
  3. Score-before-update — current tokens don't influence their own scores
  4. Single left-to-right sliding-window pass

Model weights are never modified during eval; only a per-window throwaway delta (1024 floats) is optimized and then discarded.
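The flip test referenced from PR openai#1240 can be sketched as follows (the toy scorers here are mine; the real test runs the full model): flip one token and check that no score at or before the flipped position changes.

```python
import numpy as np

def flip_test(score_fn, tokens, flip_pos):
    """Causality check: score[t] may depend only on tokens[:t], so
    flipping the token at flip_pos must leave every score at
    positions <= flip_pos untouched."""
    flipped = list(tokens)
    flipped[flip_pos] += 1
    a = score_fn(list(tokens))
    b = score_fn(flipped)
    return bool(np.all(a[:flip_pos + 1] == b[:flip_pos + 1]))

def causal_scores(tokens):
    # Toy causal scorer: each score uses only the strict prefix.
    return np.array([sum(tokens[:t]) for t in range(len(tokens))])

def leaky_scores(tokens):
    # Toy non-causal scorer: every score peeks at the full sequence.
    return np.array([sum(tokens)] * len(tokens))
```

A causal scorer passes for every flip position; any scorer that reads future tokens fails.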

Reproduction

```
pip install flash_attn_3 --no-deps --find-links https://windreamer.github.io/flash-attention3-wheels/cu128_torch291/
torchrun --standalone --nproc_per_node=8 train_gpt.py
```

Credits

Base: PR openai#1019 (@abaybektursun). Pre-quant TTT: PR openai#1006. Coprime loader: PR openai#1184 (@icryo). L-BFGS SLOT concept: PR openai#1318. Causal SLOT: our PR openai#1306.

abaybektursun and others added 4 commits March 28, 2026 08:32
…11473 (3-seed mean)

AR self-generated calibration (no val/train data during quantization).
Recreated from PR openai#728 at @valerio-oai's request for clarity.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ptq-xsa-bigramhash3072

Record: AR Self-Gen GPTQ + XSA-all + BigramHash 3072×112 — val_bpb 1.11473 (3-seed mean)
3-seed mean 1.0046 (std 0.0003). Beats merged SOTA (1.1147) by 0.110.

Novel: L-BFGS causal SLOT — optimizer (L-BFGS), space (logit), and
constraint (causal, context-only positions). Passes flip test (PR openai#1240).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@resouer resouer force-pushed the submission/lbfgs-causal-slot branch from 46318b4 to 4ede046 Compare April 4, 2026 15:19

resouer commented Apr 4, 2026

Closing: resouer/main is out of sync with upstream. Branches are clean and ready — will create upstream PRs on openai/parameter-golf when approved.

@resouer resouer closed this Apr 4, 2026