diff --git a/records/track_10min_16mb/2026-04-09_SP8192_3LayerRecur_ParResid_QK525_LegalTTT/README.md b/records/track_10min_16mb/2026-04-09_SP8192_3LayerRecur_ParResid_QK525_LegalTTT/README.md
new file mode 100644
index 0000000000..510565fdc7
--- /dev/null
+++ b/records/track_10min_16mb/2026-04-09_SP8192_3LayerRecur_ParResid_QK525_LegalTTT/README.md
@@ -0,0 +1,97 @@
+# Record: SP8192 + 3-Layer Recurrence + Parallel Residuals + QK-Gain 5.25 + Legal TTT
+
+**val_bpb = 1.0810** (3-seed mean, std 0.0002) | **~15.99 MB** | 8xH100 SXM
+
+## 3-Seed Results
+
+| Seed | Sliding bpb | **TTT bpb** | Artifact (bytes) |
+|------|-------------|-------------|------------------|
+| 42 | 1.0829 | **1.0808** | 15,991,930 |
+| 314 | 1.0827 | **1.0810** | 15,992,919 |
+| 999 | 1.0826 | **1.0812** | 15,993,232 |
+| **Mean** | **1.0827** | **1.0810** | **15,992,694** |
+| **Std** | **0.0002** | **0.0002** | |
+
+Merged SOTA (PR #1019): **1.1147 bpb**. Delta: **-0.0337 bpb**. Clears the 0.005-nat threshold.
+
+## Key Techniques
+
+1. **SP8192 + GPTQ SDClip** — int6 matrices (k=12.85), int8 embeddings (k=20.0), zero selective pruning (PR #1394 @clarkkev)
+2. **3-Layer Depth Recurrence** (layers 3-5, activated at frac=0.35) — 17 virtual layers from 11 physical (PR #1331, #1437 @dexhunter)
+3. **Parallel Residuals** (layers 7+) — GPT-J style: attention and MLP read from the same input (PR #1412 @Robby955, PR #1204 @msisovic)
+4. **QK-Gain 5.25** — learnable per-head query scaling; monotonic improvement as the initial gain was raised from 4.0 to 5.25
+5. **Legal Score-First TTT** — SGD (lr=0.005, momentum=0.9), 3 epochs per 32K-token chunk, cosine LR decay, score-before-update ordering (PR #549 @abaybektursun, PR #1413 @dexhunter)
+6. **Tuned Hyperparameters** — WD=0.095, MLR=0.022, EMA=0.9965, warmdown=0.72 (PR #1445 @X-Abhishek-X)
+7. **LZMA code wrapper** — ~16.6KB of code, saves ~43KB vs. uncompressed
+
+## Architecture
+
+11L x 512d x 8H / 4KV, MLP 4x, LeakyReLU(0.5)^2, partial RoPE (16/64 dims), layerwise LN scale, tied embeddings, logit softcap=30.0. Depth recurrence: encoder [0,1,2,3,4,5,3,4], decoder [5,3,4,5,6,7,8,9,10] (loops layers 3-5, activated at step ~2016-2018 depending on seed). Parallel residuals from layer 7: attention and MLP operate on the same pre-residual input (sketched below). Skip gates (sigmoid-gated U-Net connections).
+
+## Training
+
+MuonEq-R optimizer (row-normalized Muon, Newton-Schulz 5 steps), AdamW for embeddings/scalars. 4550-4557 steps (seed-dependent wallclock cap) in ~588s on 8xH100 SXM. Linear warmdown to LR=0 over the final 72% of training. EMA decay 0.9965.
+
+## Quantization
+
+Full-Hessian GPTQ with SDClip: `clip = k * std(row)` for a principled rate-distortion trade-off (sketched below). int6 for attention/MLP matrices, int8 for token embeddings. Byte-shuffle + Brotli-11 compression. Zero selective pruning needed -- the model fits natively under 16MB.
+
+## TTT (Test-Time Training)
+
+Score-first, chunk-based SGD adaptation at eval time (see the sketch after this section):
+
+- Chunk val tokens into 32K-token chunks
+- For each chunk: (1) score all sliding windows under `torch.no_grad()`, (2) train the model on the scored chunk tokens with SGD
+- 3 epochs per chunk, cosine LR decay across chunks
+- Gradient clipping at 1.0, distributed all-reduce for multi-GPU
+- Total TTT eval time: ~320-370s across seeds (within the 600s eval budget)
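+
+## Implementation Sketches
+
+Minimal, illustrative sketches of the pieces above -- simplified stand-ins, not the committed `train_gpt.py`. Helper names such as `sliding_window_nll`, and the assumption that the model maps token ids straight to logits, are ours.
+
+Parallel residuals (layers 7+): attention and MLP both read the block input, GPT-J style, rather than chaining sequentially:
+
+```python
+def block_forward(x, attn, mlp, ln1, ln2, parallel: bool):
+    if parallel:  # GPT-J style: both branches read the same pre-residual input
+        return x + attn(ln1(x)) + mlp(ln2(x))
+    x = x + attn(ln1(x))  # standard sequential pre-norm residual
+    return x + mlp(ln2(x))
+```
+
+SDClip: clipping each row to `k * std(row)` bounds the distortion spent on outliers while keeping the integer grid fine (k=12.85 for int6 matrices, k=20.0 for int8 embeddings). The record applies full-Hessian GPTQ on top of this rule; the sketch shows plain round-to-nearest:
+
+```python
+import torch
+
+def sdclip_quantize(W: torch.Tensor, bits: int, k: float) -> torch.Tensor:
+    clip = k * W.std(dim=1, keepdim=True)  # per-row clip threshold
+    scale = clip / (2 ** (bits - 1) - 1)   # symmetric integer grid
+    q = torch.round(W.clamp(-clip, clip) / scale)
+    return q * scale                       # dequantized weights
+```
+
+Score-first TTT: every token in a chunk is scored before any weight update (Condition 3) and exactly once (Condition 4). `sliding_window_nll` is a hypothetical stand-in for the record's causal sliding-window scorer; the multi-GPU all-reduce is omitted:
+
+```python
+import math
+import torch
+import torch.nn.functional as F
+
+def score_first_ttt(model, chunks, lr0=0.005, momentum=0.9, epochs=3):
+    opt = torch.optim.SGD(model.parameters(), lr=lr0, momentum=momentum)
+    nll_sum, n_tok = 0.0, 0
+    for i, chunk in enumerate(chunks):  # 32K-token val chunks, in order
+        with torch.no_grad():           # (1) score BEFORE any update
+            nll_sum += sliding_window_nll(model, chunk)  # hypothetical helper
+            n_tok += chunk.numel()
+        for g in opt.param_groups:      # cosine LR decay across chunks
+            g["lr"] = lr0 * 0.5 * (1 + math.cos(math.pi * i / len(chunks)))
+        for _ in range(epochs):         # (2) adapt on already-scored tokens only
+            logits = model(chunk[None, :-1]).squeeze(0)
+            loss = F.cross_entropy(logits, chunk[1:])
+            opt.zero_grad()
+            loss.backward()
+            torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
+            opt.step()
+    return nll_sum / n_tok  # mean NLL in nats; bpb additionally needs byte counts
+```
+
+LZMA code wrapper: the committed `train_gpt.py` is a two-line stub that decompresses and `exec`s the real source (the decode side appears verbatim in this diff). One way such a stub could be produced, assuming `src` holds the full training script:
+
+```python
+import base64
+import lzma
+
+def pack(src: str) -> str:
+    """Build a self-extracting stub from the full training script source."""
+    raw = lzma.compress(src.encode(), format=lzma.FORMAT_RAW,
+                        filters=[{"id": lzma.FILTER_LZMA2, "preset": 9}])
+    blob = base64.b85encode(raw).decode()  # b85 alphabet contains no quotes
+    return ('import lzma as L,base64 as B\n'
+            'exec(L.decompress(B.b85decode("%s"),format=L.FORMAT_RAW,'
+            'filters=[{"id":L.FILTER_LZMA2}]))' % blob)
+```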
+
+## Compliance
+
+Per Issue #1017 (Track B -- legal eval-time adaptation):
+
+- **Condition 1 (Causality):** Sliding-window eval is strictly causal. Each position is scored from prefix tokens only.
+- **Condition 2 (Normalized distribution):** Standard softmax over the full vocab. No n-gram cache, no logit biasing.
+- **Condition 3 (Score before update):** Each chunk is fully scored under `torch.no_grad()` BEFORE any SGD update. Training touches only already-scored tokens.
+- **Condition 4 (Single pass):** Each token is scored exactly once. No rescoring, no multi-pass selection.
+
+Additional:
+
+- No SLOT (standard or causal)
+- No pre-quant TTT on val data (the model is quantized once during training; TTT adapts at eval time)
+- No ETLB (eval-time logit bias)
+- No n-gram cache or tilt
+- All artifacts under 16,000,000 bytes on all 3 seeds
+- Training under 600s on all 3 seeds (~588s actual)
+- Eval (sliding + TTT) under 600s on all 3 seeds (~500s actual)
+
+## Reproduction
+
+```bash
+pip install brotli sentencepiece
+pip install flash_attn_3 --no-deps --find-links https://windreamer.github.io/flash-attention3-wheels/cu128_torch291/
+MATCHED_FINEWEB_REPO_ID=kevclark/parameter-golf python3 data/cached_challenge_fineweb.py --variant sp8192
+
+SEED=42 QK_GAIN_INIT=5.25 TTT_ENABLED=1 TTT_LR=0.005 TTT_EPOCHS=3 \
+  torchrun --standalone --nproc_per_node=8 train_gpt.py
+```
+
+## Credits
+
+- **@clarkkev** — SP8192 + GPTQ Embeddings + SDClip + MuonEq-R + depth recurrence (PR #1394)
+- **@dexhunter** — 3-layer depth recurrence (PR #1331, #1437), legal TTT on SP8192 (PR #1413)
+- **@abaybektursun** — Score-first TTT framework (PR #549, merged precedent)
+- **@Robby955** — Parallel residuals on SP8192 (PR #1412)
+- **@msisovic** — Parallel residuals concept (PR #1204)
+- **@X-Abhishek-X** — Hyperparameter tuning: WD=0.095, MLR=0.022, EMA=0.9965 (PR #1445, #1471)
+
+## Acknowledgements
+
+Thanks to OpenAI's Advanced Competitor grant ($500 compute credit via RunPod), which was instrumental in running the 160+ experiments across Steps 1-22 that led to this result.
+
+## Included Files
+
+- `README.md` (this file)
+- `submission.json`
+- `train_gpt.py`
+- `train_seed42.log`
+- `train_seed314.log`
+- `train_seed999.log`
diff --git a/records/track_10min_16mb/2026-04-09_SP8192_3LayerRecur_ParResid_QK525_LegalTTT/submission.json b/records/track_10min_16mb/2026-04-09_SP8192_3LayerRecur_ParResid_QK525_LegalTTT/submission.json
new file mode 100644
index 0000000000..697ddd665e
--- /dev/null
+++ b/records/track_10min_16mb/2026-04-09_SP8192_3LayerRecur_ParResid_QK525_LegalTTT/submission.json
@@ -0,0 +1,36 @@
+{
+  "author": "bigbag",
+  "github_id": "bigbag",
+  "name": "SP8192 + 3-Layer Recurrence + Parallel Residuals + QK-Gain 5.25 + Legal Score-First TTT",
+  "date": "2026-04-09",
+  "track": "10min_16mb",
+  "val_bpb": 1.08100,
+  "val_bpb_std": 0.00020,
+  "seeds": [42, 314, 999],
+  "seed_results": {
+    "42": {"val_bpb": 1.08079, "artifact_bytes": 15991930},
+    "314": {"val_bpb": 1.08103, "artifact_bytes": 15992919},
+    "999": {"val_bpb": 1.08118, "artifact_bytes": 15993232}
+  },
+  "hardware": "8xH100 80GB SXM",
+  "pytorch_version": "2.9.1+cu128",
+  "technique_summary": "SP8192 + 3-Layer Depth Recurrence (L3-5) + Parallel Residuals (L7+) + QK-Gain 5.25 + EMA 0.9965 + WD 0.095 + Score-First TTT (SGD 3ep) + GPTQ SDClip + Brotli",
+  "compliance": {
+    "train_under_600s": true,
+    "artifact_under_16mb": true,
+    "eval_under_600s": true,
+    "no_slot": true,
+    "no_pre_quant_ttt": true,
+    "no_etlb": true,
+    "no_ngram_cache": true,
+    "score_first_ttt": true,
+    "three_seeds": true
+  },
+  "attribution": {
+    "sp8192_gptq_sdclip": "@clarkkev (PR #1394)",
+    "depth_recurrence": "@dexhunter (PR #1331, #1437)",
+    "parallel_residuals": "@Robby955 (PR #1412), @msisovic (PR #1204)",
+    "legal_ttt_framework": "@abaybektursun (PR #549), @dexhunter (PR #1413)",
"hyperparameter_tuning": "@X-Abhishek-X (PR #1445)" + } +} diff --git a/records/track_10min_16mb/2026-04-09_SP8192_3LayerRecur_ParResid_QK525_LegalTTT/train_gpt.py b/records/track_10min_16mb/2026-04-09_SP8192_3LayerRecur_ParResid_QK525_LegalTTT/train_gpt.py new file mode 100644 index 0000000000..bc965bee09 --- /dev/null +++ b/records/track_10min_16mb/2026-04-09_SP8192_3LayerRecur_ParResid_QK525_LegalTTT/train_gpt.py @@ -0,0 +1,2 @@ +import lzma as L,base64 as B +exec(L.decompress(B.b85decode(";JwB(bzJ~7n@VT6Qap3bt~@<3h>ok~)Km^%c^ys%R{D_%yAk9-_tV7^coUOo3$w>`(`ci)t`2F7>r>Ltx>>S2CRw|7ov>Wn1e~_!RLQ=%V9g?)G3yPsu%SBy!lj1PaC-x%dDmCDOZ^r^!)+WWz}ejKXTJ#^U6Ra!};QocHHXQC+4UM!QQ!-N5Xd|%~a(9)bTYIO+>B~8~@lqmri%^qEkQUy074Rh6w7V_#^s9J-3BNA`G;qyR$LYcI?e+loZVWi~B$n=TKFp{%SeHYp{oNWh;U@Ahk8M2$OU%K8B$lb*dRQXd-GR_@*KAZdRdwSd#v=LSq1v@Puul=a7WXDmh1^kBj}Y2XlER!D2E{&{%lV(hz$#n5%+%sk&Q}>{y0xpRgiQQBJeVV0hy8UD3ntyo@(Pv+K7^zVRDt4bah(r8kfsZThb+H1)~K-lIr4`|V#-2R>G7pP*N!fwWd&Dq8C)y=NrG_U_Oz6Q?+@ok1?(VJ5?ZT~&}C4Ks38WRB>3i=I!}H-8qq=&yKJ;tbpwwn~lAseD^q1C*u5T;lKQtF;?zv@u0f36%6SXU~txi3v5iSPK*`fNE9531KaQDL`zTPF$MX4U(-3sY-&?>QJe)giBQzpor7H)AZ#4=Hn#`AoAL7tT){&bw(fgz|eQRt`#6-<>;m*+&$!nf|od6&lVKYYHuOoNgZU_L>E@!O%__mlt=);Hwdc43+CM?sh5y+my3XSVYMO8F1pXuq$fvTU<$mpDjr>Lm){DeV)>4AKAhA?jxjH<-3yYQ#5qz+4c`Utifny+Ydmr4?c_z60#9@FU+U1&O$Lfg$WrX7gCj50O1t`1A`k04LVr;^*~{|@(TS5>#TAjL(B`umc8bVA$bS|F?^2A7E}z7IIgZlY(8Ex#K+nLh0vzlKK=74U!g+sX4T?e3_^_7XB1A(HB{pYd{vHYcak_P3DZ2LAB20wAP+C_9p7R|0}wA=p~JFi&xD8H}n(LxCc5rcmwF`!s(tSf_l_TXk(cPZJ_z`)iV4#r^gzawYQ%HE1iaUF=(KAcKXE%%6Hx0i;?;p1w#dN7!-y!(2GUw()t4|BXt%+05bu$yea+{f!deuk%(g-o}&XEEWm!lO+1^On#i#4rhP{bDYb9ZnbGd5n{*P->hZxI<{=3c-92I#g*mTey8O>cuw%hdwjB!#=GH_0?hY|Lf@L$0Qp+k03PROh)o-cMOQ!b>qfvPNJLTVvCBX24BGgI=_|35Bd%&Vq&!LWECF4!1&J?@uRfe=N2mi{l-S0aW101I*cY_A&2R~zfij0IZST;@;xJYti>{)weL@A2ZOGrq(U-ibnWz0BL;s!=S;`!K6@M501z-dU(OqY0?!!!Tk6Z{9!iH*tDZsjYG`J8UEqJz~7cNEPh1A#iz_$2*Nxo;S{UehRA^{Mf3GV^9hBEKSL(iD!=aKpjYCTqmhYA|h4zASL-v?9UWx8tzm#N5eHo2h3w*`(kHM2e}vDviz3$~a(Y#jlJL?*}m9&(Oqs1+CNUy7g~Z#lRN#>g9{7u~tot;|0qTu4G4xwdXk3eT+l1l$)Vq%}j^^1b(jIvF|OcNb1Jz&)>b)qGiC5P7yS}AvC}VDKORD$#^Ydjg!zDuM#$J+k<}O|o#9dvGrp)*yShv3->joMiF%~orV4^0cl9F!@VqwD`5fjekV@3E`STlX!=JDxbOQiv?Jp)$Xy|Z>g)@q_QYKopPeu&ghhPNw0y&{j?$GHwDQoztHvVU)a0ca6}7{#3^KK1uTcBkMSF$IDQp#Nhy>JTHPK2w+%N#FZ(D)=sw?BkfduBK{Owa(SkBq^_S*|NP@DI(VEWNqETjYsZ@bch5-dlWjP>|)xv+AknhsqQ!j3!=TT%CYvl>o#XU4AApoVBJ;db=W0m0#FH7by8-Z*V~$!QptJuPqLkH-bA#`L?*g#-60qO9x7)rWh{~YY77{NX_v_!Mc-#`(n{>OHy|HxotTLyFAdqCe^bQsveKyNdxf%^ECJDw1jQaV{3doP1nC-IuYoJBS)BjwI+@*WRZpBQq--|WAHx2WWVue@lE`*T9AY1=3wKyIT}9Ss;d##=nZ>!%19!lx_0W91se3dXzq5oIE}=)Lf4xkby*McKg=z&Qh>gK5l~kV7t^lQK_s8TXQqiwiAz+UT7qOmRWvI~~2yt~5_%swZRW#&SB~-uWRFRPsi5WZAzJp1&o@T%9?d1EUEpyP1v5zSN9`Nzg4<9J>%D?ZP~-T(dwGEKqZMPuhgN<|KigW>x{H0%t_&ya;8v^0F=-)sK*R85LA|5>|ZYqA#XmVbFc92&H9WjeO5C!FX%LsiXg6}0#l(Pg(=6hjd7H_7$6NLIA+rCa;GE_3R75D#&J(7=>z|LN$87?M}UpavJo5jeYlJy>3UxeQ{duojamIjZmWv|*!Tjr5rT-K7C#w~_vZ!oIz>O;(D%nYcBK)`IjO`=SDZo*4vJ4V2bFcFg(2@0lt|4uUCPbO&N6^dv4y}sPBwT(0$|M|*?y;Jv@#8^JCr!hxD0=c#R8ALJkOUZ5;?_TS5GI^kyB>q;{eo<-=N|;JU?~G80$0+y}Bn>nRaoX5bq_lK8&2G2D0K(N6U_xX}HirikYywzHoCpo)+j^d}t`9sXluV$o6?ewHe5Ui+m5Y9oyhGHXI2OTu~#~ow24E&_|NZmvkjEEo{?lrj>I+3}kwNN$<|WFHD7&hT*J`96C?gGpoC>Df4aU8P&s$90m&Ugy{5AY@?hVDTc{$QAjsoHqS{ck6snhl_)o^474tl{Idqaq1M4gTm}}RWDGq`oxuuW;}<=ge*lXX+3Fk7GOExz3(~A%nd`>nKrLxi-U((%)yeb9`|*{}XN04z>c7Ok#4FmP|M+baT*Fuu4_Vg~%D*h%0S&xmIOVhF)nXWYKj;F_kSpCA=E?|IY3TP731i4AmJ9{uIj}nvkZA~PCqrx=X%FiI(pX*UhQoq9-g$`cDVZp7ZcaGt$}M)fe&uhR9-a*yxklx#6Au8ICI}z@KWrk3Obx%(^tG4C1D@?bgq2jBnZ(O?j&jyR!8J6j%~%zfEtIAPDKu$5hd4V~`Wco06Vlpvp}HuyKK}fz6t3mD@K4unNqV2DlC&6K
Fhx<%Ai%h1TodO4gAJ51=G}q7(kO9N(i4X$8MWnd^UE!-v`^1{5*9tbIpjgUqjwfygmDW~97EWp8~n;>1xM9pw3FxHUd_tgGkMlT)SNNWx(w9cg>9zUZzX{Q^w*d#U194JF-wwlAz0yZ)tH+4r_JXO{=3ej6LQd~!naPgtZ;`)wmc&a4dppCSVTiQcc8UfugJge8Gerop`ck3K6;T*^kZp<9Hf-jlYfu+Ncv+`5i&JOwCg|NE;{ox`VY;Ub%E~4C%G@CLJi@0X_co$N5u2iD|I6taml4pwulJ)p!O69folAr>W$wc_@9v9VvU6N+O#a=gX;BimpVU*u)q2XscX=lx^Zn-BN-;_}UkjSaA)uW&+!Y^2T_W8ydgx(`~YE%eEmgcn-wY0id8}v=bSq#xRdSY89OVSts;`59NKaqnBJzq!px7xcxQkwC!%nzq;ACIq57xa9CNC~WPR=_N|sDZ*mo*d={3!J1&)iMAr6Ji^XPO(S$Grm|U4{YYnUtbS1j`5tgw6*^XZGHcn{g#{1)Vn2E4|r#ek2MWQ9#>rzQ3zDgm9txa-#PSX0d6dM`yebz%Twh(oT!pmA+0Mo^hZ(f*S=n@m;h!I4LZqoenOsP%;j#RNdY2pvZFx4)uHo@w&tRGaMg#!x08$i%Dckl;@(Bn`nq>l-xa+7FYu&~4Yq@Vdt+c}5e4r$M9h_8xqG>aHp&?q8)HWa0Sz{jdpt`RF2TQ#ak`F0Ku`0Z2b&|K}-j9!Ag!ByLopEz?X#Iz-hH-XI{L8;k+U1StEfZI9`UEgh8v|1fNGF!KlPDs)l0lp4YCF?m_QdcJRn{uyNTlYhoL&(q5rP2Z9SU>2tYy;UDp^wa%Fg~Hhk(2S`|te$Gx+Ii0m3i){{7ug{x+aM|ib=hj3X>-y&H^d1>Yk&CE~y^DBjF0EgaKtEaKI7Wky8h&Y@UJ7P7{NwNv$D(&at=BZzc4W-VudsUAV!}qh{Bx~xqASW>QiwS&Pp?nj~MhW!&p&ZK!3XYPt|0r=y$NiXqT5r|a3t~awh`ELbbKti#lGw4_Z}(|4CO9mS3kjLy9es(hZKys_D+DbXrMuaf)dQf+fd=>LNl`QmMe|5HAZu3&M|Qc$e&*iiiM1kpmZL3f%xon_T*7vFZe9~Mf+S+JioV)e9`9pu;^&wln&a4*Ffme0YR3eyDzaPl4eJOEcxo}+_<$hvv>BL)>?w7w?zq|h-nRo`f!5Z^Z0EYf3QZs73GD`>p#=LV@%sLW3~V7UBob6G;zP|6$TYwVbdcqN#-p-y330KXa?YLk+7|%B|s655R+FA)bcA;K2bp!effB5mS5M{$`4RUF)D-}S2zDLqE^vwi-W%}HG7E`-9#iZY98C2*wag^L8FGhTg>4~1t<927~iA%U4&gdRuPFnqSz#Pp^sW|GtL`42h!3vJLPs{Yk+FgK!+g~a3qqe(;LMP&qfS#hT%mKhNhcUPsWZ`K`DcL?sHamF=Hhu*!?d1u{s#h*t!VqsV3y_mGfSdb^JScFk8jUdO?g(o(2gC%_^uh1$<-K*o~`R)Y+;0K`JnLDY6s#;ZWB1vtnry0W&135=PkgRY7sTp#l@E7$0*w4Kr<2ke+_dq6&O6JW$*9x7%6WHy4`^yxw#Al;YVTb3CPy~Z;@}r_c~jG3g)%+iPOR2*zg^<$yYfn!U2MNM&RjuDjCBg+-)-Z5SNK$@YUY3LX2>lQFN54`UZRmO7GTUTn(e0S)DjFC0qZn#`W~1Tlo=z>)mCXdAK+UBFc44FEMl2e8`+5f&30d-K)8VmW}JZU`gXNq<|eO?7R^m97LpKwJUX;MC3lNB6M=aQZ4k|KAi4}mlh$J>ZI31_j^-*scF&{J^lIKauwG>6f7cviw1wPmS*>ozfDBu}n&hyDs>b^#XsvZ6btR8gNpO*h-+X7gLNr4n;Gb4JJKwCEXrC|9KHY`Ml?jmzn}65x`!Z9+@iql=}{4Sy>*h6-b&m)4X&g069dds355-{$xntPJ3I?LHZW}y2gLcFlfsp-|cUJQ&8%}s)J*SWRvBXz8D#>Iq4gVhCIUE!HeV@S_J2Dur1!X`9pd4?`0|?>oic&j~O{wG^t#stUrKuH6`vf2HSEUw6s0YDS2H)1kTD&9r^(5Qv(H~=!>`wBE$nA1xS@CN9|rYBQ*A$Qq0u0CJW$8-(-8+kascp$ekk)Q&nC3GRn=<2>E=P{Pyu$WKo(P}w#Ev_QQhD`4O6wVp{Mlj>5E?0R|`+6E(*x4_XsAG7WwH{+OyB?=ghZ@+HKNED%{1R->XEtpR39%RfO}y&~Gd4Za-_fyku%XTl!YIn_7xmt;p>Sa2&&w%UZM^kj0SYG(|W);}iWTOn3K+J$i-2zRX8sXx*y6dS@(}ufe>$Uz?NfWN(#!ifPmpZgXdw_7O>%j_22is^1IeqCfLH9t{@P8g87^ivbb@wKFj2^e}tN(KeUrP8j9I;T&Htn`L1$uT#j*m8QV%*=6l2#CLn_{d&x*L(SHHrfk(@zl+sA`8%&J6h%+LKk409)E83-HAcGBHdA=gP4(Ejt2;;%*Bsrr~L4C69Vu~evF{x7H+zM)QqA?I)_vIV*?PhK-EXVsG%&TJrwZL-kU5)Hb_A5!MFXh=aX|#Ch|gVNS*LWR|DtrBT=n03RP}e@3qiS>LeIKWv`n4>Lp5A3A|oZDQN8f4YLl0|Xo+)2kyWF_3tQB>g?YL69?$9rVe0*e|h_mYIQtQ@Nr`+n#Jpupgn(3qhzsS+S=bniZdRV%q>WR6We6Z3y_O>GitqnkrZ%n&Zv`5}ANvM`Z%Wqsdb4CiWJ80w_@d$0o*uEutYe_>#QH;E=Jt#$%`22Dm6la?)Q!AYPAJI6^-jd3}0vG`y4~t!iWU@C@WY_)6t{}K*Tw^!mrdMz0C-Qs+6KnYkgaQIpQinp4ybr13`soBJxjD*zfjwFjN(90&1Tb)%ssq%NDXFd&wyWQ!ff_Jx#i9g6mpp2UoHbF=#F!I9%(cTajqvE=9^5TEdMjoe-p4S?;>3urimS1<{3)vL2o2sD`WVjTtZk;B%azGTCVbpq(K=M=zpQ2pfR2VOOF8O`B=fvWO}LPJI2)YXhduxYeS58MA|e`z1l_C4af?0DM=hV&77&eK4hLNa!C-+*9el4sHKSW8mx6QI&P4^TCNr;na8jouqxDI_gg9+&PaR6uuy0kX+y>Z7ol}UA_$}a1hajh`6(Lkz!sHo%rRy9&bQqR!F`AUNI<&@w4=g#)AdbNWa=y;Shlf&Ub}?bii#;w*e=GGtsJ$(anmWG46}d)%8UN#04~LZ1@1AC7e#wB5oK5bSm=;v@l46vYr5<-t(d`rE2T5ild@WFcEH{-e)BkFo8cV$!`Dv!rxp)qCch+I8Y~4e#Ud#I|M}LLJgSM0NeUv}8FvrwtL4$X_O$wJybTIUyp7I3?tEiMkR|_U9UqDPU!YQF}Wsg^5%>ArjK{@-hYQ$VTD!_k~Q%3%EVJv;~S0Z1;=)mif+b?l>WC)YA$1vN+r3@>41_3`qR+DPKr=inwgPDlKcF?xsRl9Ty11f7|qm_%Q?BvSXuP(`vARP;^0jRy#I8_4-31Z8L?R7OqkHB9O5?y~}>$CFqEY*Sm+0
``e?j6>#Ig-Gcg(uCnAPxl|RBOi<((`X1ZmMd(Hj^>|+AbcGqBzm&?Lu8ZX-L`JCdYK1o5{X|71cL!hX`7hf6X+G-P@iAE_x!+)y;mr=Ud_lc+MZ(T_%(c*IFfTP!+%RR6<2UAzCXCg^}^HN5qjG=qSib-uhE}oenz$u$>B^#;*a|$f8f*uqy365bw<|;d6a5AbU4gj&yu>)j4gF0C*(RgpD;Ynta?d+2uah7SDCB8$a~%b>0p*&W3)^lbt^S9`mu?)O<@xZ`0mn-ti3eCMq}Qs$=gQ^9Q0&BYYQ`wb;g(YP`9+d-mf5HBMKe%Y<6B^kz`PszHSL;thiEkqhAuq^hzqQT*@5`D=i+)kljv7m)5{KuyofVvbQ9S2fzy;%5zK+^go_Ei=;OY3p-!gL#o`5J0E>?1vZW9U=B~)+@08TbT7G*^cnCtd6+?$9((s?JH&L_Lyt!KLLYZmv}Wwp0WG*OlGoo8a9%Tr0ftKTK1Z&Y+*lV{~iQ*9Xe^OASRTtJgT)r;2m=%$l@G#wj?d2`N-|-#3_X;p_E5&>WvfnOO&s$zUT1Qe%#Ax)C9;)2uPXNF^P^9hp(vWD_73O{(@Caj_@mH1Gj>8EFvkF&EO-xiipj^p}c>;Go#yf3VRwYiX$^%$+R)owts?Hw6g>D(^-17Kgwh4m?~iK-r1!vfkJVq^J`L4|NbB!VIg&oix^}w|6<|>xvKMphlnx*YpoWHojB{=GKz?(&WrzR24)Ef{W~D5w!Ers${^fPo*5dUjGKqst|=SeIhxoKqCAVVNT5MnsNPsEL4AdpNN3D`&-?`JUvmj#?gTXJXc<>d&yiuY|C+O@kq2cJI!cb$dy*;0nkEwKbaQUhZk;h9c>Rzq_g^mh=JL234tKpzZ4OgW>FEpvo<;zkpo7lm(ZiW$WRWhRm1)s=#L{IK-3qh2-imsiW8T?S+y`b=uvIkJ;o8D(`cs7sJHa#+x)NgmM_%!a9H7SZ&imLw%)!5eLlR;Xzf7z>icyIA@2S9x`v(varTLZu9kDupp@Lgud{n+O789ynsG>m4!r^Ku6oV5DawC0L%L_xOtgHU|DJQAr@HOkUkSQ9eBbHQ*#s@VnU_!U%u^2!s5sXvKnQQhElE=iMW~jq!=VA&V<%rs>p<)X~MCKH-wD<7w)hANC8zJ#J;Non%*)jfJJWHp08#j`E_UgK-w5SzlSbptP)BQzcR)-*2d?kZ_WA$f2rKt7jZN80sc&09EAk8=oYdyr}TAAMu+E~P=Lf1!SD@T&T{p!mLlPO!JSSOlIG&8uXbsN~rv*JuOpD|_Qd2GWt$*3^hxq~0s7AwHx7o|a(0i;}+;9@eeirC0hGg=-$(*B3Hwx^ph#UJ5cJmFw0BM2gpD|v5eH7xQla3uWVWIN;R(-*fWG?cvYVrNs>iJn3ky4S_J_T{-)P;H!56U7PyPK23k88;@307qSNT|?=0^u=!y0uNOtx{;&7%lR-Mk+-olpQZXhW_o(wE=RKwgLUBKVumuuKW3>L?*1W@;g4eDr5@@vg=7Qg>t-zon|i)RZC*@@ycVvPzwD|}YC#yfq-}-)R-?!cyM9RW+3$O%opq0g^+}6G>nbEB2yp_lzC$cW@?b8%ST0d<g`L7998nQH6W_QbtPlwEb4ZIv@JA=&f3&3!H4rqg1r;G{#R6a+}<8V%f92?6^b(CfY)U^u<>e5r!F25t&EXeru>UzJha5BN1(V7^)`NG>j@d_{v~hQj;2R7gxgxInUj9rtI7l}`YqGHVe@I;Onw2-)}l0G@}i@vARM$U+zE@eJeTkonEAk9G)_6J}e0VD6bTB*!Ox&$>~zrYh#b8W;+|xZsr97Z;->_zujTjbKj0fjLhtA>iV112RswXbHv`UbM!7P}T|*jNkow6Nk2QCxCoS)7JMJWMvcUmbD?V!ZZo>uo84(Kof;eF+Qd=|dwA03p4+?(dRSZ&RD(>1XsqA{VaVjlAyF)a30Cb<%FDz?+|Ixx>i4h%wpXbF8RW5hi_>7ue%4SQHY7(fHzGB(?F3?`X{`a^p}j9NVQ;9?NGJxsO!Z?H8yDPxn%!@v4T$>PbP0y>I_wQQn24%lJTkKp6H+vJV2)M(dWf|iC+LErDy6C-de=uB&MoUjeL2XO0L4bZ`4vP3uVbytPNN6NDg3zLUHYX0^Imij2Fi`d4?$(7ZeUBYF)3&s@2KjOP#}RQZe4)MIFxaVQFK{L>46-9O=&!>KAT8xgqRcWNyl5AMK}a-gotG4Yw%ipxT>>fwY>M{_0HI24D0&89=IB>v=yT$EouK#X%GY1)R#2^jpx;T>jArIc}U_o!D=Lc5JG%h>x*#2(S_Nj|z+&GkRdR0=1{xCpa6q?iLhBx_p_rNll4IDEnS?txCSXARF_VWH&Stv~W#|GA|7hz==#>cl_M9Pf5P;ENrVnk{>6|VxH^Io)I@t4R;#|Q~F9wC!Njq=*t#XiqxluDz4oXZxJ7>d&jC4rMo9O(V^~NiI2C&WG*^$v2SJrm1&q`^f^8f-FtExv+~yOP%ztp7gVF#Y<1vIQ>Weq(Cu`8C7!_pfd^?*6~p3Sqh2Y;F3m^0T~_#91U*km55Yl1i5WS*j80m_v97fn4_021v!PoUb#Q4KyZ1F>W5)DsPJJGNI%F*O`r#kDje4y{AK2wiQ#&n2ZgoDQI({BL_tk(4UhEsjLJoy!BgCpjdkRgNUm^RGFx1)24FU8IipRaK0R22R+3g?hqUF6X(1e@kDQK{Kg#T#^=@mpxgviq*rYRp{+rsoPWV{vu4L)BrxS9R3Gwfi0xw_5dpRSTZKPb5_W`ZiSbHD*cGsY#00LLUDiQjkNdlpcsN|4PnStEUy)CGE~0gJbosWzUsfz~4cWVtIU}aR<=o`m*cj(CRpK6&tP?YOs1@_Iu5=vkpT&*jHk+w%={xIz7yZ`hUr&%HidU^Z1hV0S0nJ(qhgbS6sGduZ%(hy#Qkvl9wn*dLUEnjkRO=LcHBf@=Op*+^t%XWBkjy}LT8UgdT;{OZE^$e`0{itTiLxmgkjvhT8;Q;H+=u&r;uoa&-z*q8rF#mZjv1l+Ia{Y$sXMYuUbG;OHUJRLQVo1RnQw4T7`fQNp!8j4kyw>GXZKQ7K_@Je3UvVCvf%Pm#7jo0GSefG2kZE%1GmGHy0xJ96ine<;RH52z6dtU4TI0TajFmTtVOnwM9edIowW{vpoKit$&0;$HEwx4Jh>D^=c!z#iYzDy9KxT*9EP#s?=nbEc(tZRElIQUh!#K-LLI7Zu4U=pMG-T6_dkx>e6%G?(S5)>$LTdldWtd9ukX%KmG7Za&f&rjka&__V@X;pJld?TFyTfux3Bd82v_oTi@h%?{s06D!U&-R6V__BBu>$4+Tb*Q(Hu+yjnkAjU7B8F$%l5Bs#6o&%*tDRN5jGE_FgMn%F6pysWzvxmW(W-Ag67LA0+LNQwVp#iJFkwFpjXxf>wnGh+Wvjpgf?#%YQ?c=!g`%m(30L8=Y@GiP9ImTjUXkmmw7og|VY#2Kh_lUpQR#h@r5RxQ^>02z^UDEh>!3I~wWNJ%mqmX#j@w-iU4$iv4Iyr}t`yb69?Iq7TEH_duNVi1Z&6SN+q!mV3fcRKeLpLnuIjmx
#Hj2MaHzd598UtC~}jw04N)JUc?Gg@yx{N_ma@9z8=PviXXTAdst0azc!s*{EI{Su%6yK#;93;L1`#OQG>n9XZr{+Z>yr`d6<}AvejxjLF{%iY~J5Ig30PcW07&yR?9M(*l#uLM&dIEwnZQfqlZ>6PH)AMdo@Ymn!F)kbVOe*tkmNaHmcn{xt307BAzpI;1{6+O}qhC0#`-p0Rm{yRX|j*`T+O0iQ_#I9Cc@dF1z6AIBvk8luW&SH6z*FwgHWe?tv5e3mn=md;e9DKE6HkR^Jz9;kJ_00"),format=L.FORMAT_RAW,filters=[{"id":L.FILTER_LZMA2}])) diff --git a/records/track_10min_16mb/2026-04-09_SP8192_3LayerRecur_ParResid_QK525_LegalTTT/train_seed314.log b/records/track_10min_16mb/2026-04-09_SP8192_3LayerRecur_ParResid_QK525_LegalTTT/train_seed314.log new file mode 100644 index 0000000000..0354154a9e --- /dev/null +++ b/records/track_10min_16mb/2026-04-09_SP8192_3LayerRecur_ParResid_QK525_LegalTTT/train_seed314.log @@ -0,0 +1,148 @@ +W0409 06:13:15.360000 211 torch/distributed/run.py:803] +W0409 06:13:15.360000 211 torch/distributed/run.py:803] ***************************************** +W0409 06:13:15.360000 211 torch/distributed/run.py:803] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. +W0409 06:13:15.360000 211 torch/distributed/run.py:803] ***************************************** +Hyperparameters: + adam_eps: 1e-08 + adam_wd: 0.02 + beta1: 0.9 + beta2: 0.95 + compressor: brotli + data_dir: ./data/ + datasets_dir: ./data/datasets/fineweb10B_sp8192 + distributed: True + ema_decay: 0.9965 + embed_bits: 8 + embed_clip_sigmas: 20.0 + embed_lr: 0.6 + embed_wd: 0.085 + embedding_dim: 512 + enable_looping_at: 0.35 + etlb_clip: 3.0 + etlb_enabled: False + etlb_lr: 0.05 + etlb_steps: 5 + eval_seq_len: 2048 + eval_stride: 64 + gptq_calibration_batches: 64 + gptq_reserve_seconds: 12.0 + grad_accum_steps: 1 + grad_clip_norm: 0.3 + head_lr: 0.008 + is_main_process: True + iterations: 20000 + ln_scale: True + local_rank: 0 + logfile: logs/cee7df56-b5ec-4e9e-862a-1e689d7d40a3.txt + logit_softcap: 30.0 + loop_end: 5 + loop_start: 3 + matrix_bits: 6 + matrix_clip_sigmas: 12.85 + matrix_lr: 0.022 + max_wallclock_seconds: 600.0 + min_lr: 0.0 + mlp_mult: 4.0 + model_dim: 512 + model_path: final_model.pt + muon_backend_steps: 5 + muon_beta2: 0.95 + muon_momentum: 0.99 + muon_momentum_warmup_start: 0.92 + muon_momentum_warmup_steps: 1500 + muon_row_normalize: True + muon_wd: 0.095 + num_heads: 8 + num_kv_heads: 4 + num_layers: 11 + num_loops: 2 + parallel_residual_start: 7 + qk_gain_init: 5.25 + quantized_model_path: final_model.int6.ptz + rank: 0 + rope_base: 10000.0 + rope_dims: 16 + rope_train_seq_len: 2048 + run_id: cee7df56-b5ec-4e9e-862a-1e689d7d40a3 + scalar_lr: 0.02 + seed: 314 + skip_gates_enabled: True + sliding_window_enabled: True + tie_embeddings: True + tied_embed_init_std: 0.005 + tied_embed_lr: 0.03 + tokenizer_path: ./data/tokenizers/fineweb_8192_bpe.model + train_batch_tokens: 786432 + train_files: ./data/datasets/fineweb10B_sp8192/fineweb_train_*.bin + train_log_every: 500 + train_seq_len: 2048 + ttt_chunk_tokens: 32768 + ttt_enabled: True + ttt_epochs: 3 + ttt_lr: 0.005 + ttt_momentum: 0.9 + val_batch_tokens: 524288 + val_files: ./data/datasets/fineweb10B_sp8192/fineweb_val_*.bin + val_loss_every: 4000 + vocab_size: 8192 + warmdown_frac: 0.72 + warmup_steps: 20 + world_size: 8 + xsa_last_n: 11 +train_shards: 128 +val_tokens: 40540160 +model_params:35944536 +gptq:reserving 12s, effective=588000ms +warmup_step: 1/20 +warmup_step: 2/20 +warmup_step: 3/20 +warmup_step: 4/20 +warmup_step: 5/20 +warmup_step: 6/20 +warmup_step: 10/20 
+warmup_step: 20/20 +loop_warmup:enabled encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +loop_warmup_step: 1/20 +loop_warmup_step: 2/20 +loop_warmup_step: 3/20 +loop_warmup_step: 4/20 +loop_warmup_step: 5/20 +loop_warmup_step: 6/20 +loop_warmup_step: 10/20 +loop_warmup_step: 20/20 +0/20000 val_loss: 9.0096 val_bpb: 3.4879 +1/20000 train_loss: 9.0111 train_time: 0.0m tok/s: 8311737 +2/20000 train_loss: 12.3909 train_time: 0.0m tok/s: 8044851 +3/20000 train_loss: 11.1793 train_time: 0.0m tok/s: 7783151 +4/20000 train_loss: 9.4676 train_time: 0.0m tok/s: 7732451 +5/20000 train_loss: 8.3600 train_time: 0.0m tok/s: 7705113 +500/20000 train_loss: 3.3445 train_time: 0.9m tok/s: 7700474 +1000/20000 train_loss: 3.1974 train_time: 1.7m tok/s: 7703674 +1500/20000 train_loss: 3.1045 train_time: 2.6m tok/s: 7706014 +2000/20000 train_loss: 3.0699 train_time: 3.4m tok/s: 7707641 +layer_loop:enabled step:2017 frac:0.350 encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +2500/20000 train_loss: 3.0667 train_time: 4.7m tok/s: 7037181 +3000/20000 train_loss: 2.9475 train_time: 5.9m tok/s: 6655694 +3500/20000 train_loss: 2.9675 train_time: 7.2m tok/s: 6408299 +4000/20000 train_loss: 2.9091 train_time: 8.4m tok/s: 6234961 +4000/20000 val_loss: 2.8741 val_bpb: 1.1127 +4500/20000 train_loss: 2.7634 train_time: 9.7m tok/s: 6106541 +4557/20000 val_loss: 2.8129 val_bpb: 1.0889 +stopping_early: wallclock_cap train_time: 588119ms step: 4557/20000 +peak memory allocated: 39045 MiB reserved: 39124 MiB +ema:applying EMA weights +pre-quantization post-ema val_loss:2.80977929 val_bpb:1.08775313 eval_time:7438ms +Serialized model: 135431033 bytes +Code size: 16594 bytes +GPTQ:collecting Hessians from calibration data... +GPTQ:collected 67 Hessians in 12.7s +Quantized weights: + gptq (int6): blocks.attn.c_k.weight, blocks.attn.c_q.weight, blocks.attn.c_v.weight, blocks.attn.proj.weight, blocks.mlp.fc.weight, blocks.mlp.proj.weight + gptq (int8): tok_emb.weight + passthrough (float16): blocks.attn.q_gain, blocks.attn_scale, blocks.mlp_scale, blocks.resid_mix, skip_gates, skip_weights +Serialized model quantized+brotli: 15976325 bytes +Total submission size quantized+brotli: 15992919 bytes +quantized val_loss:2.84004242 val_bpb:1.09946892 eval_time:25390ms +quantized_sliding_window val_loss:2.79676256 val_bpb:1.08271394 eval_time:121183ms +ttt:start chunks=1238 ttt_lr=0.005 ttt_epochs=3 +quantized_ttt val_loss:2.79241542 val_bpb:1.08103103 eval_time:368074ms diff --git a/records/track_10min_16mb/2026-04-09_SP8192_3LayerRecur_ParResid_QK525_LegalTTT/train_seed42.log b/records/track_10min_16mb/2026-04-09_SP8192_3LayerRecur_ParResid_QK525_LegalTTT/train_seed42.log new file mode 100644 index 0000000000..36a3efa916 --- /dev/null +++ b/records/track_10min_16mb/2026-04-09_SP8192_3LayerRecur_ParResid_QK525_LegalTTT/train_seed42.log @@ -0,0 +1,150 @@ +W0409 04:38:18.092000 206 torch/distributed/run.py:803] +W0409 04:38:18.092000 206 torch/distributed/run.py:803] ***************************************** +W0409 04:38:18.092000 206 torch/distributed/run.py:803] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
+W0409 04:38:18.092000 206 torch/distributed/run.py:803] ***************************************** +Hyperparameters: + adam_eps: 1e-08 + adam_wd: 0.02 + beta1: 0.9 + beta2: 0.95 + compressor: brotli + data_dir: ./data/ + datasets_dir: ./data/datasets/fineweb10B_sp8192 + distributed: True + ema_decay: 0.9965 + embed_bits: 8 + embed_clip_sigmas: 20.0 + embed_lr: 0.6 + embed_wd: 0.085 + embedding_dim: 512 + enable_looping_at: 0.35 + etlb_clip: 3.0 + etlb_enabled: False + etlb_lr: 0.05 + etlb_steps: 5 + eval_seq_len: 2048 + eval_stride: 64 + gptq_calibration_batches: 64 + gptq_reserve_seconds: 12.0 + grad_accum_steps: 1 + grad_clip_norm: 0.3 + head_lr: 0.008 + is_main_process: True + iterations: 20000 + ln_scale: True + local_rank: 0 + logfile: logs/48389061-b85f-41e3-a259-ad25b5b13359.txt + logit_softcap: 30.0 + loop_end: 5 + loop_start: 3 + matrix_bits: 6 + matrix_clip_sigmas: 12.85 + matrix_lr: 0.022 + max_wallclock_seconds: 600.0 + min_lr: 0.0 + mlp_mult: 4.0 + model_dim: 512 + model_path: final_model.pt + muon_backend_steps: 5 + muon_beta2: 0.95 + muon_momentum: 0.99 + muon_momentum_warmup_start: 0.92 + muon_momentum_warmup_steps: 1500 + muon_row_normalize: True + muon_wd: 0.095 + num_heads: 8 + num_kv_heads: 4 + num_layers: 11 + num_loops: 2 + parallel_residual_start: 7 + qk_gain_init: 5.25 + quantized_model_path: final_model.int6.ptz + rank: 0 + rope_base: 10000.0 + rope_dims: 16 + rope_train_seq_len: 2048 + run_id: 48389061-b85f-41e3-a259-ad25b5b13359 + scalar_lr: 0.02 + seed: 42 + skip_gates_enabled: True + sliding_window_enabled: True + tie_embeddings: True + tied_embed_init_std: 0.005 + tied_embed_lr: 0.03 + tokenizer_path: ./data/tokenizers/fineweb_8192_bpe.model + train_batch_tokens: 786432 + train_files: ./data/datasets/fineweb10B_sp8192/fineweb_train_*.bin + train_log_every: 500 + train_seq_len: 2048 + ttt_chunk_tokens: 32768 + ttt_enabled: True + ttt_epochs: 3 + ttt_hash_buckets: 16384 + ttt_hash_embed: True + ttt_lr: 0.005 + ttt_momentum: 0.9 + val_batch_tokens: 524288 + val_files: ./data/datasets/fineweb10B_sp8192/fineweb_val_*.bin + val_loss_every: 4000 + vocab_size: 8192 + warmdown_frac: 0.72 + warmup_steps: 20 + world_size: 8 + xsa_last_n: 11 +train_shards: 128 +val_tokens: 40540160 +model_params:35944536 +gptq:reserving 12s, effective=588000ms +warmup_step: 1/20 +warmup_step: 2/20 +warmup_step: 3/20 +warmup_step: 4/20 +warmup_step: 5/20 +warmup_step: 6/20 +warmup_step: 10/20 +warmup_step: 20/20 +loop_warmup:enabled encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +loop_warmup_step: 1/20 +loop_warmup_step: 2/20 +loop_warmup_step: 3/20 +loop_warmup_step: 4/20 +loop_warmup_step: 5/20 +loop_warmup_step: 6/20 +loop_warmup_step: 10/20 +loop_warmup_step: 20/20 +0/20000 val_loss: 9.0090 val_bpb: 3.4877 +1/20000 train_loss: 9.0111 train_time: 0.0m tok/s: 8348323 +2/20000 train_loss: 12.3695 train_time: 0.0m tok/s: 7501070 +3/20000 train_loss: 11.1355 train_time: 0.0m tok/s: 7450650 +4/20000 train_loss: 9.4131 train_time: 0.0m tok/s: 7463659 +5/20000 train_loss: 8.3274 train_time: 0.0m tok/s: 7497425 +500/20000 train_loss: 3.3346 train_time: 0.9m tok/s: 7691255 +1000/20000 train_loss: 3.1948 train_time: 1.7m tok/s: 7698918 +1500/20000 train_loss: 3.1030 train_time: 2.6m tok/s: 7705554 +2000/20000 train_loss: 3.0686 train_time: 3.4m tok/s: 7710490 +layer_loop:enabled step:2018 frac:0.350 encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +2500/20000 train_loss: 3.0673 train_time: 4.7m tok/s: 7021940 +3000/20000 train_loss: 2.9476 train_time: 
5.9m tok/s: 6642896 +3500/20000 train_loss: 2.9672 train_time: 7.2m tok/s: 6396259 +4000/20000 train_loss: 2.9106 train_time: 8.4m tok/s: 6223343 +4000/20000 val_loss: 2.8722 val_bpb: 1.1119 +4500/20000 train_loss: 2.7622 train_time: 9.7m tok/s: 6095957 +4550/20000 val_loss: 2.8119 val_bpb: 1.0886 +stopping_early: wallclock_cap train_time: 588047ms step: 4550/20000 +peak memory allocated: 39045 MiB reserved: 39124 MiB +ema:applying EMA weights +pre-quantization post-ema val_loss:2.80873254 val_bpb:1.08734790 eval_time:7440ms +Serialized model: 135431033 bytes +Code size: 16630 bytes +GPTQ:collecting Hessians from calibration data... +GPTQ:collected 67 Hessians in 12.7s +Quantized weights: + gptq (int6): blocks.attn.c_k.weight, blocks.attn.c_q.weight, blocks.attn.c_v.weight, blocks.attn.proj.weight, blocks.mlp.fc.weight, blocks.mlp.proj.weight + gptq (int8): tok_emb.weight + passthrough (float16): blocks.attn.q_gain, blocks.attn_scale, blocks.mlp_scale, blocks.resid_mix, skip_gates, skip_weights +Serialized model quantized+brotli: 15975300 bytes +Total submission size quantized+brotli: 15991930 bytes +quantized val_loss:2.84065044 val_bpb:1.09970431 eval_time:25515ms +quantized_sliding_window val_loss:2.79714850 val_bpb:1.08286335 eval_time:120961ms +ttt:start chunks=1238 ttt_lr=0.005 ttt_epochs=3 +quantized_ttt val_loss:2.79180191 val_bpb:1.08079352 eval_time:366525ms diff --git a/records/track_10min_16mb/2026-04-09_SP8192_3LayerRecur_ParResid_QK525_LegalTTT/train_seed999.log b/records/track_10min_16mb/2026-04-09_SP8192_3LayerRecur_ParResid_QK525_LegalTTT/train_seed999.log new file mode 100644 index 0000000000..3711e9b2e9 --- /dev/null +++ b/records/track_10min_16mb/2026-04-09_SP8192_3LayerRecur_ParResid_QK525_LegalTTT/train_seed999.log @@ -0,0 +1,148 @@ +W0409 06:37:24.114000 44333 torch/distributed/run.py:803] +W0409 06:37:24.114000 44333 torch/distributed/run.py:803] ***************************************** +W0409 06:37:24.114000 44333 torch/distributed/run.py:803] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
+W0409 06:37:24.114000 44333 torch/distributed/run.py:803] ***************************************** +Hyperparameters: + adam_eps: 1e-08 + adam_wd: 0.02 + beta1: 0.9 + beta2: 0.95 + compressor: brotli + data_dir: ./data/ + datasets_dir: ./data/datasets/fineweb10B_sp8192 + distributed: True + ema_decay: 0.9965 + embed_bits: 8 + embed_clip_sigmas: 20.0 + embed_lr: 0.6 + embed_wd: 0.085 + embedding_dim: 512 + enable_looping_at: 0.35 + etlb_clip: 3.0 + etlb_enabled: False + etlb_lr: 0.05 + etlb_steps: 5 + eval_seq_len: 2048 + eval_stride: 64 + gptq_calibration_batches: 64 + gptq_reserve_seconds: 12.0 + grad_accum_steps: 1 + grad_clip_norm: 0.3 + head_lr: 0.008 + is_main_process: True + iterations: 20000 + ln_scale: True + local_rank: 0 + logfile: logs/20616371-ddc0-45c3-8421-7f3cda028b2c.txt + logit_softcap: 30.0 + loop_end: 5 + loop_start: 3 + matrix_bits: 6 + matrix_clip_sigmas: 12.85 + matrix_lr: 0.022 + max_wallclock_seconds: 600.0 + min_lr: 0.0 + mlp_mult: 4.0 + model_dim: 512 + model_path: final_model.pt + muon_backend_steps: 5 + muon_beta2: 0.95 + muon_momentum: 0.99 + muon_momentum_warmup_start: 0.92 + muon_momentum_warmup_steps: 1500 + muon_row_normalize: True + muon_wd: 0.095 + num_heads: 8 + num_kv_heads: 4 + num_layers: 11 + num_loops: 2 + parallel_residual_start: 7 + qk_gain_init: 5.25 + quantized_model_path: final_model.int6.ptz + rank: 0 + rope_base: 10000.0 + rope_dims: 16 + rope_train_seq_len: 2048 + run_id: 20616371-ddc0-45c3-8421-7f3cda028b2c + scalar_lr: 0.02 + seed: 999 + skip_gates_enabled: True + sliding_window_enabled: True + tie_embeddings: True + tied_embed_init_std: 0.005 + tied_embed_lr: 0.03 + tokenizer_path: ./data/tokenizers/fineweb_8192_bpe.model + train_batch_tokens: 786432 + train_files: ./data/datasets/fineweb10B_sp8192/fineweb_train_*.bin + train_log_every: 500 + train_seq_len: 2048 + ttt_chunk_tokens: 32768 + ttt_enabled: True + ttt_epochs: 3 + ttt_lr: 0.005 + ttt_momentum: 0.9 + val_batch_tokens: 524288 + val_files: ./data/datasets/fineweb10B_sp8192/fineweb_val_*.bin + val_loss_every: 4000 + vocab_size: 8192 + warmdown_frac: 0.72 + warmup_steps: 20 + world_size: 8 + xsa_last_n: 11 +train_shards: 128 +val_tokens: 40540160 +model_params:35944536 +gptq:reserving 12s, effective=588000ms +warmup_step: 1/20 +warmup_step: 2/20 +warmup_step: 3/20 +warmup_step: 4/20 +warmup_step: 5/20 +warmup_step: 6/20 +warmup_step: 10/20 +warmup_step: 20/20 +loop_warmup:enabled encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +loop_warmup_step: 1/20 +loop_warmup_step: 2/20 +loop_warmup_step: 3/20 +loop_warmup_step: 4/20 +loop_warmup_step: 5/20 +loop_warmup_step: 6/20 +loop_warmup_step: 10/20 +loop_warmup_step: 20/20 +0/20000 val_loss: 9.0076 val_bpb: 3.4871 +1/20000 train_loss: 9.0093 train_time: 0.0m tok/s: 8343627 +2/20000 train_loss: 12.3279 train_time: 0.0m tok/s: 8070345 +3/20000 train_loss: 11.1545 train_time: 0.0m tok/s: 7890271 +4/20000 train_loss: 9.4800 train_time: 0.0m tok/s: 7814520 +5/20000 train_loss: 8.3867 train_time: 0.0m tok/s: 7769044 +500/20000 train_loss: 3.3423 train_time: 0.9m tok/s: 7693163 +1000/20000 train_loss: 3.1961 train_time: 1.7m tok/s: 7693467 +1500/20000 train_loss: 3.1007 train_time: 2.6m tok/s: 7700057 +2000/20000 train_loss: 3.0684 train_time: 3.4m tok/s: 7704062 +layer_loop:enabled step:2016 frac:0.350 encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +2500/20000 train_loss: 3.0672 train_time: 4.6m tok/s: 7055482 +3000/20000 train_loss: 2.9468 train_time: 5.9m tok/s: 6666746 +3500/20000 train_loss: 
2.9729 train_time: 7.2m tok/s: 6414421 +4000/20000 train_loss: 2.9128 train_time: 8.4m tok/s: 6233960 +4000/20000 val_loss: 2.8745 val_bpb: 1.1128 +4500/20000 train_loss: 2.7632 train_time: 9.7m tok/s: 6104373 +4555/20000 val_loss: 2.8132 val_bpb: 1.0891 +stopping_early: wallclock_cap train_time: 588014ms step: 4555/20000 +peak memory allocated: 39046 MiB reserved: 39070 MiB +ema:applying EMA weights +pre-quantization post-ema val_loss:2.81014279 val_bpb:1.08789385 eval_time:6851ms +Serialized model: 135431033 bytes +Code size: 16594 bytes +GPTQ:collecting Hessians from calibration data... +GPTQ:collected 67 Hessians in 12.7s +Quantized weights: + gptq (int6): blocks.attn.c_k.weight, blocks.attn.c_q.weight, blocks.attn.c_v.weight, blocks.attn.proj.weight, blocks.mlp.fc.weight, blocks.mlp.proj.weight + gptq (int8): tok_emb.weight + passthrough (float16): blocks.attn.q_gain, blocks.attn_scale, blocks.mlp_scale, blocks.resid_mix, skip_gates, skip_weights +Serialized model quantized+brotli: 15976638 bytes +Total submission size quantized+brotli: 15993232 bytes +quantized val_loss:2.83957454 val_bpb:1.09928779 eval_time:8681ms +quantized_sliding_window val_loss:2.79659675 val_bpb:1.08264975 eval_time:93831ms +ttt:start chunks=1238 ttt_lr=0.005 ttt_epochs=3 +quantized_ttt val_loss:2.79281076 val_bpb:1.08118408 eval_time:318003ms