@@ -0,0 +1,29 @@
# GDN-Hybrid + Sliding Window Attention + compressed-code warmdown1000 (cold-cache 3-seed mean 1.01671233 BPB)

- **3-seed mean:** 1.01671233 BPB
- **3-seed std (sample):** 0.00134386 BPB
- **Best seed:** 1.015700 BPB (seed 1337)
- **Worst seed:** 1.018237 BPB (seed 2024)
- **Artifact size range:** 15,713,422 to 15,903,365 bytes

## Per-seed authoritative results

| Seed | Steps | EMA BPB | Quantized BPB | XSA BPB | Artifact bytes |
|------|------:|--------:|--------------:|--------:|---------------:|
| 42 | 2227 | 1.007164 | 1.016200 | 1.021202 | 15,733,879 |
| 1337 | 2242 | 1.007164 | 1.015700 | 1.020105 | 15,903,365 |
| 2024 | 2227 | 1.009032 | 1.018237 | 1.024111 | 15,713,422 |
| **Mean** | — | **1.007787** | **1.01671233** | **1.021806** | **15,783,555.33** |
| **Std (sample)** | — | — | **0.00134386** | — | — |
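
The mean and sample standard deviation above can be reproduced from the per-seed quantized BPB values with stdlib Python (`statistics.stdev` is the sample, n−1 standard deviation):

```python
import statistics

# Per-seed quantized BPB values, copied from the table above (seeds 42, 1337, 2024).
quantized_bpb = [1.016200, 1.015700, 1.018237]

mean = statistics.mean(quantized_bpb)
std = statistics.stdev(quantized_bpb)  # sample (n-1) standard deviation

print(round(mean, 8))  # 1.01671233
print(round(std, 8))   # 0.00134386
```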

## Technique stack

1. **SP1024 tokenizer** with a GDN-hybrid backbone (`[GDN×5] → SWA → [GDN×5] → SWA_shared`).
2. **Fixed-predictor / no-TTT Track-A lane** — no eval-time or pre-quant adaptation in the scored artifact.
3. **MuonEq-R + AdamW** training mix, EMA `0.997`, late QAT threshold `0.15`, and **warmdown=1000**.
4. **GPTQ int6 + zstd-22** packaging.
5. **Compressed-code record packaging** for `train_gpt.py`, `architectures.py`, and `configs.py`, which recovered artifact-size headroom without changing the trained model family.
6. **Sliding-window attention side path** present in-model; the submission authority remains the `quantized_bpb` values reported above.
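
A sliding-window attention path restricts each position to a fixed-size causal window of recent positions. A minimal boolean-mask sketch (the window size here is illustrative; the record does not state the model's actual window):

```python
def sliding_window_mask(seq_len: int, window: int) -> list[list[bool]]:
    """Causal sliding-window mask: position i may attend to position j
    iff i - window < j <= i. True = attend, False = masked out."""
    return [[(i - window < j <= i) for j in range(seq_len)]
            for i in range(seq_len)]

mask = sliding_window_mask(seq_len=6, window=3)
assert mask[5] == [False, False, False, True, True, True]   # only the last 3 positions
assert mask[0] == [True, False, False, False, False, False]  # causal: no future tokens
```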

This record uses a fixed int6 model with **no TTT, no SLOT, no RLS, and no eval-time adaptation**. All three serialized artifacts are below the 16 MB cap. XSA telemetry is reported for completeness, but the submission authority remains `quantized_bpb`.
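
The compressed-code packaging works by shipping each source file as an lzma-compressed, base85-encoded blob plus a tiny loader that decompresses and `exec`s it; registering the recovered source in `linecache` keeps tracebacks readable. A minimal round-trip sketch (the demo source and filename are illustrative):

```python
import base64
import linecache
import lzma

# Pack: compress a source file's text into a base85 blob.
src = "def answer():\n    return 42\n"
blob = base64.b85encode(lzma.compress(src.encode()))

# Unpack: what the stub files in this record do at import time.
recovered = lzma.decompress(base64.b85decode(blob)).decode()
linecache.cache['demo.py'] = (len(recovered), None,
                              recovered.splitlines(True), 'demo.py')
ns = {}
exec(compile(recovered, 'demo.py', 'exec'), ns)
assert ns['answer']() == 42
```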

@@ -0,0 +1,5 @@
# Self-extracting stub: decompress the embedded source and exec it as architectures.py.
import base64, lzma, linecache
globals()['__file__'] = 'architectures.py'
_src = lzma.decompress(base64.b85decode(b'{Wp48S^xk9=GL@E0stWa8~^|S5YJf5;A7bs%Uu8wXbs4O{*J2g^P4X}8A*?nKeo}D<QO>((oAtmlzXSM1Ig2`L`+cv@?w-@3r2bpI{8yz9`;@SXxb8{$fRtA3$`M_7l-P&$A)VpsI<{M1(k?)Sp%fgGHf{OA(e1{%%h>U(XyoMXIPp+maZ<`Y6aE!)~96}-?&KS_k=U~h?SsP>3(#hSXx;Gb}nY(jTA3=m<)>eOpgmMpv(N6Jw4N1{;sYZhPMhU0&O&Da!-;hJL0RkdivHzjmv^(5uZsfYeq27zGmIe`KoOHIQuYUjFMPp8l;u48XK7Wszs%jwMD3Ag^3BFZA|PV)yGfF&9VH-(s2Q|eiU=*2gP#b(g1_keckp+RTd1ebYAlkI_Q(1KdgIzo^2Ajexh6Vo<-BmT64<v4-1-B@Zcy2g-N}82Q<Ea*=hw3AZ>)g?{FY-8_?)s$@TwD&jnwEE94uXTfL)k!^9qGM3)Q`34S(SfW9B%*GWL@QNjM=A7Q9M@<u(gRT%zP@#PE!V{C+UBo+1IHl|$&CKbB<9~V0pkE~35`x<jBHXC}G)3sBisMOXu$^%bGFSi`<f~Bd4%sn={P=e{86L<@A<n--1wB}LOJz25ZlC_=nzccg{Rl}=S)3M6&X1NoN>9S#*t9C)TqR=33=ksZTjaD+X4M-pW80`pt#r_W8U_;|k+Ua{=p6)56RyNUVjjaM=XPL{;3+Y<$iDw6ztH0YE_r!|+t|eW=X<2P0w{@zd5iBFY$CTrD*n+ol+(E42zm}zhsa#B?qtO=*Lc{nP&hfDdkRtQ=Ep3rLq>hVos*{y!iUw=w2kYqLrMkW!EgT-IHvop$I@64dqxYqCV@_2s5AjSTf;6;^g}%+S<b`Qlt)i$@u^$(%=<Fn^$~BO+_|0snc?HA~$-Ti$=^!Tfm~Jpf1xjpCNVbO#!Oq?1Dt<LPzo-LQgJ(5gaj6CQwPbR{*wHUXA6$Oyt`%y#MgUM#!UXz)%o#5@p~zjNN;Jrc;n2+g=%K2-n=5zHKFi4zka>Zf;6p2bFm~5gVdaZ;l6u*JAf0kul^?N2uz0!sG3i*0K+|_D7}U1uB^#T(Yj%9S7V277Q}3e^KEP|TnWsSNT!n98yW)qh?F`Dne{<Yd1xy3SY#uYTFg`Gj+SGdkCCBPoH0p&1O@jA7plGXa)Mq+MjT+dxJqr!g(!@l`OV>&h-HQW_;jl)yQ(JD0+=SXB6lPhIl5y&+;)mb@J7se?c1nHe2f;xjtWiwr%87?ToT05m1D7Ht^7wfyHwC&oUu|2#kt3@McM!NN*Xc(8l3poPj324}xWH7HqS;NT18Ws9RV~4Jb>@o!@qB$3Rod_D-2^PR;B@BW*;pxUq=9KFG^`q&c=d>!=yylYsps!c<pZN}!NhMnN{~GJIrND%t-DTGz)wKBgH4AJY*7d^FyqK~@{a~puQzF;@vMm_nCih6o=v8~gI3otWAl%rWCqUSJ6);n)Ef9N0C|DI4tWZc^)Lw7eHWzO#k_VR3VF}2DvvASmgC<&p`7N=oYe#$d$yl{ynjYr4-5HF`DW8nA-h4Oj1GLGQGH5=dV($~R5)q;33CtFcM1>DqUp&i52VmLj6d^p4h8b^NT6U4z9CbVh$=ki+|kz+>sEkqyPVAkwo(WKMju~!K33A^GmP{d5wpd}cC<c0BF>>rGEH^_v|sZC>pk-|TC&?x5C&1Rrf_c^uk4hePX(VYWmHYq2yxenyjs>Yfo1y@K2Xm;pgPofy-WD7x^L`xCmBLInjBy|B5rtRG7`-X_RQx_vA!=wCi5)p_NE$)u#5le=Kip=y)gOL{>W!g584gaFlfPn2Mm`OuckQXPoC`4ykJQ?t?T~;IPnr@c6(Y0>Xi(9;rSI3(mK55h2yOusT&))QSD}Xr$z@0jT(LP!Z8jfo5%LAlprOO7;9m>d9})Qqc99#5Y$8$n!Oca<Hw2wMR8E-=mr`fW{T_@y8%GS<#1XT8Nz
M0$g~1tg{9rg%C){FzX(U6X#auj+D$l5jV0Ss-4@%z;)(@gYv6`-<G}F>g9SJ|J(&AkT9tR3=Ep8tYdo=BP$nEgDSOrG2*Uq%bHKitAx#+Ox%d78EPqpwjakDR+Dr!tnkELJ>vP9y;{XLy;%eyP9P@bC*dDiovwN|2OvUL5|ICP)ik0h;e=q<7iKiW2(+=ipjVsA%Pm5lk8}(T+hMK!GrqWo6mWTqJjgnh;h8&w3Z6jO=ORx!i`I#8ku6W*#@azgGJjEFo@ZeoRuq$0W!%8xJ?=Kr=H~3nLT`CZF(}?vt+0&L@w3j`2UpupFq;As;(s%VL1G}Y!UFNn#ueXiv06o}s21CoicTpwuuVeOz75vlEsW%_<QuA+H8P?GACiUkRgMGO|+Ww<*>(K#<Lx3aF=6eqf;3gQ}<ArSb{ItG|J=C!^+q*t@z@9N%AIb?)XEh~26$_$w=Ez+X+!)7r?|0(W11JJQbYgp{wX;2p`1+<a6PiG4Ls3vFit1qyRwLR>f5hlGL#P1P0#0h|z(EZidY3Z8dMji`PH&efpmyR-o?}(jL`3gmDksLIN$l}HL_7S*h~v{e6xsM3a_zc&5~tYyPn|q0(A{fJx}h4x{?J+=Kzj`#?i5q!K6NZy1OIDyReIZr^o8JPPlV6hY@fJB8vbE<uqRaaJsmhpP6|#JOX%v4D2(Wf(shGg?MvQ=6d~&e*{G<8y|(1M#g!t5n>&JuS72MPoTMbu{uoW)YXTJy_`|NXcDT1mgP%}3oj$nc?vPe-m|cPH5DvMkj=T`32U1H?b&!Cai5c@^>mFTtU(q;h_Srv2GN+>N-!`S-qe`6yhU`4v&-`Zf1vmg6DOleRZ~=7PiYtq0xqkDAs1^CmavP3W=1rYCO^60AD17J<L1>NN*DO@eqx3EQ`meU=>bQ+}A^PKmF3)1d@zn1FqVE*3t2MZ>lDV(JO&LEMhl)<l32)#M#1RNJ)C>?7qmOO%lI&Byxq`w__IVlA+oaI4yAP~CY1%&f`4TTAxcT6#bGA?~utOF<)!z7&Uc5J8o&=;`I0_c%&O7lxa`;KyQBtc=v;6(+C~@~I-0h)Pklxj2{t+K*0b^vtA%r|VB(kekV4P=(+)bGWe5QIezx}!<l1b0*-wzK1=FCT~!2tVA@ymFeM5?7X$C&)cI{1Mc`Zb5kmI5bN3kYEo;2)W!yPdA$zuRWE3KcS=sx%~u!%F@ng!{-FQ`QD(4mA$qsus5B9=cGp{;b6Yw~%2JW{ByNvPmG2-5)2OY?mEW;J8xTI@joSgNEPnFu80XHXG3+Ruqz&wO5iL=j}1g?$C0lHVBSXcj(wl!9C$_wKO+d__}*yp%o7CYJkx=!`H&iX54R9WZ<Pp{FPkFtQ{BB=F=-yd)C5%4CxJ|)#?7LEAKjzcSYqhpT~wSzVS9Gwc@%~8J^h~iK5ZxkBw+Tn?h@zscii#g?$D1g<rN<s%5;fFA9L#Vmh1eK5sdGEI0F;P(?V$u@x2lr}QxTI~SB~q8uP4S@W>&X}#)nCr|F>@$j+G<rRiFM|u!;tBkA>n&w86a8m4=i_-z``GsN9()Fg#o2F|P*D)ivEjUV>Ii^x)CY)P!er`IvAsr1PSZrqUwq*?>JMP-aa(VP?Y`}EA0{pD_VYM>t&J4lqU#?>MC-ZE94FVL<UIn3rr>#)WRp}ym5u$zku}HFYK^f|e8*?;gE$fQ$g-=xai0%n#7!{L^7gyoh_RQ-uSD6Zr`O8?{`Bs>&WZEo2&+eRHw2LpjTu+#ef^s_DBpj&NUua7j=EgZH(bR945C$%@ZZLQ^c-gF0bMI_BtQxdZC>LFk-5F^;S3~Lq6MdhzPLYzlUvzGWJJKMFeq1Ne7!$^F+-(Fpeyhelf_G6bHf8;ZGVidY_ko-k8M@de+KHwM3|h})k)Y>0rzcDk%{k?+2Om?kA`nFUp5ydxZ3HUfwvftwoBG}<oty@%e%lva9h!-%pz4e;1g(h(1msuu;VxLq21iM*9eHDHWkAo6wpPCxRb6yGBp;MR#KYxw(s?JC^$@
df3cEI%5y=cBh731+Zir^v?DT}*Ve(F{RKg$9f=^Xsz5TZ_%4hrhDvlx6v24c{#4`CLpuRALavL;Xoh&XZ+5Rl6TdpM4y8ONI!~Rv^)2^r0VWdSdnv1_-;$3pdnB$p2{hUs70TaP>Dk0$%IJEGyMm@zZm~kF|5PF;9`zDGw7gLuzP82|o0yyo`ZdH>sL3u2P&o7}?C2C^PZ3myM-DPK)MS^7ikg0XYSiiNc1DMuW8$WCb*7Iy^>jLH$!uP3v5*g*eN=L2B^7pl9@53$H&cEV5i`ev*&$3ZCZi2<gGa1Th!a8>zhED2vUfz<p_qol0K$%a1DJ%Ltvte)4^}D2LG(7IWip+DHZ1{U-*2uNm&Z(6jcAq|R&VSIAFLrdY%kCdG5EMYlwN+{yZ9JQ$1{P!~n>7f`VwL+_`2+UD0LAy^9r;IcJp|FF<R(;JEM9+`-S=ldW>H2_YRp42GchCW1&r>Llk7as4FQh#y~L;lp20-;&-ZSzR*4Nt@MU~O5hK(+@9rQ?Ofq`;i)w<&D;(t-%agZq-U3vMviP}g<pm#*oHTqULUUq60Xhy71;cYy%X!e0C9@Ud*Srx!i3;5)-FbC3vmCq6g(AeH`DByozcER*b`$1&ed7Qh>Icwv0ry_`#uSdQa()%F&Jy%ptAx}+@iu)78qNMfI(?JEO*f5)O-n>(VW&5IjzA*lWs(S>6uZ6TNOYKN11|YqS)(ljRR94GHeG+uJ#ipZp3!F3nI9S@&0Gxu3d(YkjaPn#O4sJUC4@~eG{3e%kMierjR2g_`htusm|mB5)Y?)NRB|>XYVr$4od!rGOhzb6HXk$Fe4-jnr=?AU#HNpv8lqu0KpeomI8)I!PU*_}Eqo1&q3Em@EqlI>MPuI$R7<?B5j)1u*yJ?aQlc<$DJ7386vN6rkVjfE=VMcK=8M9J88EqzsMBTUS&gUOhh+96&b9;_Wqv8&IzL4<<~fAPz>Q+D2pCybW{OtorvR;iBdUlx1MkRW{}FHsjWaS*(-GE973A#%V9?Yc7$fNmwpIzh7;7cbO3eVQNitYRK{}WiK3SPBZ>PRZP?i>OvK#oCGXWcm_&w?4xFI9>P-&+H*XbbRJ9k3r)&YxqN9*_OI=3JS9JgRy*CZ5C&7>Ews3w-?#Ea@1iy}1bEZ`+4vmPSXPoc|lejLHIoU%sL-w$ygnC_j6<<n)-wnN{R!gD=nK2btfMoQ$d8{(35!I;*)i#>DjfUvISBKCOYpjiYgd&995_C?GN6Tj=MXumdE5pq;`PK^FiVa9Q2b?>*e-MtQdhH-xZA`VjHtAZ?Pt@h!(YS3dvQu5{RF_8?i(*qKG;qz0H;1LWnQMd=^6x|yTDQ~|X0DkYu9DJfZO@AR&Gb^Y$o-t{50rkVm!Y@uz>f8EqSmB2np~O?XMG_+ZMfKa2SY@gBv(FynMU>}>%1uFDlhkXvHMlNP$*x+iS_;Xve#O;2d3aS3wPO29oe08K^Ykwt&1H=ERDD0`tc2sQf&m+B=_=DO|4m7TX+9lC96R2FE-L*0fnx?-Yf)>xbdtyp@o?8WsT!Jq@#D#s8Qr2U;pU>~21-#ZvNj+h7D1Sbqp!S`=%xcJ=Os-b;Pew)*Cp>J2-16oU!O;DR43+)p~a=k>+o>%p-CFfl}-RVA75zTo`{rqu@y{M_E*GbenUK0kRh@Kp%<-9-y60jZ5HOD25_4GD4VJ`%Rr~TIb1M@8^lLdE`8p1LCW-LH}ZV(B==Tsh`MNvfIM`{$NU?47L$t*QjoP2>LCt+d#1qTPVCCClPGMc9Xf?xWeG?64q(0tV^SEQX<vB)twN4sVp}B3P{OOV<Qgc>yRdKp@(>@x3=2n)HVdb*;ie%yr1H!y%vxF9jaP^8RUz9eCG@_b9{?rKWpQ{<W)Qze^=XHO2S;IX0E?6?T)>BL3?J0l`29|+<>~K*M1f6(vNaQN8rt?%u$T7BSZX)^TNcDPvRves2A%<fY`@JjTqAb8Q#$fUQw1Q~nb|c6ptd9E`dE-QUXtXz_4XAY86>%MpG0(Sr=G
v4j5_t50%ibaVnFY{*L5~!ZHYS#-}c<Rx_IR1k@c#GX6c@)Am*R0Z7K8Wa;r!Zr;k67S=93_w~)qx6-X4KB@94!R!rDSwyl%#=w~=a8bqlgYbKh3_9YZ4SOCLj=nLyz1}~czhtDE<b;mEPO%)vqvYa=Rs#qFmKY~)gfyttDcc`~=hW&&S8-8PRgq|qg_cf1jLFFyG`Cw&6-$V}yKg1qTgvW;l{&;jCL`W&I<rmAw_|ocF#UVmgba=UNn`z@QlJuOU&2&meSjdnCgsm$S6tDMhue29@p-J>9N7Jyv2+%rQU|QbiWnQodW6%sie}?-af1QRSSL2lC8mVgVj3|}%9j2SWE-UJA>GNU0AN2Fw3ifb(bL8d{T!^))d^Y)H5AkbE(EMiY$)x&GQ6Fpisw<+OKXb9`jj;>`P#l3#%Rb}g<ClJ0`%(0vTpQkj;)RMW9O}f7fLRP7Qz#9iHeKu@$$WYbTQes(pGET4_D`kiGl$6Tq*dc!&~-axVfq^XlX~s(I1;4ArdA*Pg`C4px^mH_Z%M~r;tNG2RS?jLqzq5G;Gku<0+h*^&K+QKh{Di<XU1@{+5_S4Q%PCRIEF1^`RIQPlj?D%-of1g&+Mp&(FwA<k6bbM@hR^RI7L#|<Q@gTSQ_5lp9*QbX?DE41d|X6F!KfybTpa(0}au__NJ5l8B{><u9SwridQDmraT+}YMQ6A-5y?q4AV}!FShrPOXHo9_kZYeN+-Zyt?w|G#JLyzJgCc0*jz~4x!zysE+F|cUo0i6XH#{eIfK`V5mnOFx%=DRT$%aV6?m=OTjSp1Wsjqy{0(gZV1%rJW!K-)dFa9rBEY#=y|P2(ZGILJ8vn!u08qkpR3{2O$T;n<LENL`^9Gs3{o`CUj%8QKx4DhWy^N3s9RhwQgUo9O>Lk21)TkQrqFCfO7)__6_EQiI+LeqisY={74y)gr+Z0OWyXJ|O`3v*~^^#A3&HNYKi2_#&l)GuH(e$VkJqE~bRpE6D8pj)1)Pa<JTBMg<49e`0RNtp@Jhj9$fWdaC2FJRx_GKiEg+W86emX=yb*~#V)xwXnFU)#SB4;8zA$}aUzuIlw!#;)NfR}v+)j7EKF?~`(G#x-wfrF=Ko=pObG!AJ3=B$1do+>3IA)${Bb9W;KBJ)pDVVL7{cdGE*T>C_PE}ZNHWrUbarV+~pL=7j!>NZxVqBVT(ICrMygQ-N8G;v*LAj0e>5Mhjq#(Z`Z;&fxC_9TQz=x<$7AOogW^`A7jJW;_r5IqZ$Bb|>@v~B&7&NGAQJ4O<;?uCxMSV=qU-Ism)5zjl`o^=yLf>(64MGO^z9M~$?N|q;%{j1wxL+6ctZ)TK40ELOxc^V|EGyEa=CqXK9IFaHo0cR&X)e<Lu1m36Wl;56UD7o=bt`6Cjq0r7rx&eo_wr2%+tVfSa{`{2<hhw0~_X4{<ScaO3v$EHM#UdJ$z}cRS{bkrMg}6sb*j3$9`Q*N|bGanaGu~-E1O==`R%5%EW3R^lFaeSl#pSuR`~b>QI+?Osg~f9j(P7w=>`hhB2oLhAwba|b^l$6Zw{BB3D%pICv~{&e^*@?yX%`kvM*}lFXjXF`QXN`V%%Lbi5gECrFIu$^j*rjGdh6c33!Lnx0H156gF$<(UCSY8hnwRqTu+?Q(Yq2{RF_@!|Cfhz`6nO321;32A40s#PAQid&SP&s=*eQ2cuA}g8ZFv;cP)S)yKG-wFkwU#LP614f$)#|>(^5-@fr-vlPHx&J@S}O8Q&gSdvrEOD?~m-oDbtouDTi?zxg||zS{xX#D@l3fs{L+IaFLTS7OEswe)16cOv!C0S-=R0c-51+&1Q;i7mT}4?&S3bFa(M8X-rgpf63<&Qa%5?XSs@RXSBPYB&La_icC`VZDlfF>rP?=<BU%sR&g4w(M&zgu`*!y|00<j)_f{7)~rc+}aJ!0s_R>TAq|bXq!vl_}3^Sdt_RTm|e4Jqe*@p^8H*fh-p$FYdl5_a%6$faS5bcqrL3XLP{O#VFkV&)A6g~_cTjOAOGor+Rs
^v0Wpui4Aav$W_SPq!EW3W;+CKj00HMQ+Q$I^sD?+7vBYQl0ssI200dcD')).decode()
linecache.cache['architectures.py'] = (len(_src), None, _src.splitlines(True), 'architectures.py')
exec(compile(_src, 'architectures.py', 'exec'))
@@ -0,0 +1,5 @@
# Self-extracting stub: decompress the embedded source and exec it as configs.py.
import base64, lzma, linecache
globals()['__file__'] = 'configs.py'
_src = lzma.decompress(base64.b85decode(b'{Wp48S^xk9=GL@E0stWa8~^|S5YJf5;1K==%Uu8wXb(;1LpQ-~@JH;7bq<!2rM>sfvNj2i-6-~M*R0t23rwIc7?+`^!VD6uagcoK6DbS{9T~420xAWik!$biWb)rrE+C-SGfkiwhV0~r&>A7MzvaxD&2QbJ{2wIdK)(s%&mtZ-@C^j&Il;>(HV9Kdzl26iSpQ@}+W8F+gFd4uyfywfyMFM`tZ6yPk2@}Bf`F{`G|QqY_~SLO)^$j(lVw5|@~26gr#@Cy$?g#DNu)(MZ3zM3D848h-pG^4;H!1seE8;!)*!vr<*hh_M)n_J*FkAv3&y+~EaG*xEwZ(y0{Dluk1)M?vK|<?M15&9sh9@)sm=9EQ!l9<F=3hI{V_#FZb(VhKFRJTCvj$idas@TZXBNDLU;Z*-1PC|^dt;z;sGN1$)2AK@%4k~y7Ij08HFB(E_tT8(BIU{1VYBaM8e3);f(C<O!9w?`>r-z=2q;$f?Np>RAmtd0SnvDX9u|NdpKRv1XCu)-hRaPhu(+3vIVgK=%?W`h<nrj@^ClP57G3H$Ka_p0^Lhhy){F)Rzt1mZ4=&&wEnHFK4UVYgugE;)jv{i-?&ldyTBS(mqu&f5X^|fs%L$&s&;0h<_JOdpj5J4B-WfP{dfC9+{^sg^v5;>*Ir!o`XM-gyZHpY(;A=WQ%uIy-LEpk?32}<@uwZHb{=xO>@YgZg&n)w-2CzKB!f!PX4{sbfc`i@s;=Z)Js>=-iM6SYI+Q(`U`M<IVh8XMFDd+B`sqIQM5;%)xR;J+D72uYc+Iz<mDBMb+;NowsU><Q-aeFf6J}m)nODlO?Mv84+2(r)&&4T&86TFCk=yXWPs~%Vy-LioO;Yd>gV{}<wkto7{`3JW|8UjLBhjgROZzy&cg`aRWW%}9i{#DRsIpaD9_Va<3?@d{W`OB1T5`_6m`oCMDbS3M7BzqM<WQX(Nvd|X!atM(ztWo!P$;vfz$#P<5xGfqqg7;Yr-so}n=-sgRPRsFA9Fl0F%Pb@36LC1gv#~HAKY1Ml^jurbyw~$C$RHK^VlPtr!MLXq3wTKOCFZ_P3-79YWYpaCz_BWcE_YNptPThG8=ZVmwW2lGk<RbZgtH|@~c!AI7~_|56dJYhXhj_=hL<YLZz{c3mGo}L36^tMzj=fJx_u*k6!31n!ByTnf?<Me-7elw4W;X!EBx6Cs~oZ(X}TztV4Z82EEqhG0{8x{QBoG9kwpH*|JMc$m8jCmaz7f5-J%nY(HgqSIzgomvEiNbsB~$Tk@6G8mHWLI=%U?eAy?aPo6mX!x#wD74cjcK>Z-mMs{`4u_n2wsGC14_Pi&1O3GbJ82-<=%ePMG6(KXaL$5W57RMv}0_vMs`@UkDBTH-R%ko})Yg?AOCht_=5RD#cvY2)jQ&tE*`7?1>pJTzY%sLiRkLAqYJr0qyGdhd~uy@0D5xG=3C?waVq~$|Q#!<?>JCX&-oh{sufJ>q}>z-_UbAeQ3l;2VRB$MCD-_NXhRZ#7v%p&MFs!t`_Y$`GiU4-YOmqjOJm$ZOflTGRdW}mWmDGGLnOW%_=;rSv3Gm$hgn<~9@hs)#-g{3+d;Vo~}ax{{l%l=VDe2Bo^B8Lcu<75bM-hIE)J8?E}g2~HGoyh5%fYjL46`UGK$i-9Ta^wXSK-7bJn%)z#XM+i(MFA#)^J?TaE?@kIZ=V4zfp$|^j3BuWKRuaz*OmfluO>S6^XY2?e9Z3q2+1CVv*C!s#1PtU@V=DVxEFsS>8lv-tc~<X(o7~cC62?Guo8WU2W#K{m_c%|lavV8mW;Tmc+d*b>n;l5Q+cxVMb98Ou7vHP5Xf53+_dNGNVPzcF-5<Y@s`%l@sGT&_6tnrf2ac(0YTLqkxs8D|5jppc#LkSCX%PAd`v&<7r1YYXy}Hg@^(PC)Kz<^CrSVSBK&Om4$c`m00HL<{~-VX(Bw{7vBYQl0ssI200dcD')).decode()
linecache.cache['configs.py'] = (len(_src), None, _src.splitlines(True), 'configs.py')
exec(compile(_src, 'configs.py', 'exec'))
@@ -0,0 +1,5 @@
flash-linear-attention
zstandard
sentencepiece
# flash_attn_3 is pre-installed in runpod/parameter-golf:latest
# torch, numpy, and other standard deps are pre-installed in the image
@@ -0,0 +1,40 @@
{
"name": "GDN-Hybrid + Sliding Window Attention + compressed-code warmdown1000",
"val_bpb": 1.01671233,
"bytes_total": 15903365,
"blurb": "SP1024 GDN-Hybrid fixed-predictor run with warmdown=1000 and compressed-code packaging. Three-seed mean 1.01671233 across seeds 42/1337/2024, with all artifacts under the 16,000,000-byte cap.",
"author": "Joshua Martinez",
"github_id": "joshkmartinez",
"date": "2026-04-12",
"base_pr": 1545,
"val_bpb_method": "quantized_bpb",
"seeds": {
"42": {
"val_bpb": 1.016200,
"bytes_total": 15733879,
"steps": 2227
},
"1337": {
"val_bpb": 1.015700,
"bytes_total": 15903365,
"steps": 2242
},
"2024": {
"val_bpb": 1.018237,
"bytes_total": 15713422,
"steps": 2227
}
},
"mean_bpb": 1.01671233,
"std_bpb": 0.00134386,
"architecture": "SP1024 GDN-Hybrid ([GDN×5] -> SWA -> [GDN×5] -> SWA_shared), MuonEq-R + AdamW, EMA 0.997, warmdown=1000, GPTQ int6 + zstd-22, compressed-code packaging.",
"compliance": {
"artifact_under_16mb": true,
"training_under_600s": true,
"fixed_predictor": true,
"no_ttt": true,
"no_slot": true,
"no_rls": true,
"three_seeds": true
}
}
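
The record's `mean_bpb` can be cross-checked against its per-seed entries with a short script (the JSON fragment here is abbreviated to the fields the check uses):

```python
import json

record = json.loads("""{
  "mean_bpb": 1.01671233,
  "seeds": {
    "42":   {"val_bpb": 1.016200},
    "1337": {"val_bpb": 1.015700},
    "2024": {"val_bpb": 1.018237}
  }
}""")

per_seed = [s["val_bpb"] for s in record["seeds"].values()]
mean = sum(per_seed) / len(per_seed)
assert round(mean, 8) == record["mean_bpb"]  # 1.01671233
```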

@@ -0,0 +1,90 @@
W0412 21:30:56.460000 842497 torch/distributed/run.py:851]
W0412 21:30:56.460000 842497 torch/distributed/run.py:851] *****************************************
W0412 21:30:56.460000 842497 torch/distributed/run.py:851] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0412 21:30:56.460000 842497 torch/distributed/run.py:851] *****************************************
=== Direction-5: GDN Hybrid Training ===
Arch: D_GDN_Hybrid (ARCH_MODE=D)
Seed: 1337, Max steps: 9999, Warmdown: 1000
Train seq_len: 2048, Wallclock budget: 590.0s
QK_GAIN_INIT: 5.0
World size: 8, Grad accum: 1
EMA decay: 0.997, SWA: True (every 50)
Late QAT threshold: 0.15
MuonEq-R: enabled
Validation tokens: 62,021,632
Model built in 0.2s
Parameters: {'embedding': 925697, 'recurrent': 13251360, 'attention': 792584, 'mlp': 18880512, 'other': 12800, 'total': 33862953}
Total params: 33,862,953
Matrix params: 33,263,616
Scalar params: 75,049
Embed params: 524,288
Generated coprime shard order: stride across 80 shards

================================================================================
Starting training: max 9999 steps (from step 0)
Wallclock budget: 590.0s
================================================================================

step 1/9999 | loss 6.9316 | lr_mul 0.0500 | mom 0.850 | 0.38 steps/s | 3s
step 2/9999 | loss 6.7037 | lr_mul 0.1000 | mom 0.850 | 0.68 steps/s | 3s
step 3/9999 | loss 6.1494 | lr_mul 0.1500 | mom 0.851 | 0.94 steps/s | 3s
step 4/9999 | loss 5.8672 | lr_mul 0.2000 | mom 0.851 | 1.16 steps/s | 3s
step 5/9999 | loss 5.8456 | lr_mul 0.2500 | mom 0.851 | 1.35 steps/s | 4s
step 6/9999 | loss 5.7912 | lr_mul 0.3000 | mom 0.851 | 1.51 steps/s | 4s
step 7/9999 | loss 5.7134 | lr_mul 0.3500 | mom 0.851 | 1.66 steps/s | 4s
step 8/9999 | loss 5.7339 | lr_mul 0.4000 | mom 0.852 | 1.78 steps/s | 4s
step 9/9999 | loss 5.6268 | lr_mul 0.4500 | mom 0.852 | 1.90 steps/s | 5s
step 10/9999 | loss 5.5281 | lr_mul 0.5000 | mom 0.852 | 2.00 steps/s | 5s
step 100/9999 | loss 3.6791 | lr_mul 1.0000 | mom 0.870 | 3.51 steps/s | 29s
step 200/9999 | loss 2.7052 | lr_mul 1.0000 | mom 0.890 | 3.65 steps/s | 55s
step 300/9999 | loss 2.4796 | lr_mul 1.0000 | mom 0.910 | 3.69 steps/s | 81s
step 400/9999 | loss 2.3839 | lr_mul 1.0000 | mom 0.930 | 3.72 steps/s | 108s
step 500/9999 | loss 2.3145 | lr_mul 1.0000 | mom 0.950 | 3.74 steps/s | 134s
step 600/9999 | loss 2.2768 | lr_mul 1.0000 | mom 0.950 | 3.75 steps/s | 160s
step 700/9999 | loss 2.2446 | lr_mul 1.0000 | mom 0.950 | 3.76 steps/s | 186s
step 800/9999 | loss 2.2263 | lr_mul 1.0000 | mom 0.950 | 3.77 steps/s | 212s
step 900/9999 | loss 2.2322 | lr_mul 1.0000 | mom 0.950 | 3.77 steps/s | 239s
step 1000/9999 | loss 2.1975 | lr_mul 1.0000 | mom 0.950 | 3.78 steps/s | 265s
step 1100/9999 | loss 2.1804 | lr_mul 1.0000 | mom 0.950 | 3.78 steps/s | 291s
step 1200/9999 | loss 2.1805 | lr_mul 1.0000 | mom 0.950 | 3.78 steps/s | 317s
step 1300/9999 | loss 2.1583 | lr_mul 1.0000 | mom 0.950 | 3.79 steps/s | 343s
step 1400/9999 | loss 2.1301 | lr_mul 1.0000 | mom 0.950 | 3.79 steps/s | 370s
step 1500/9999 | loss 2.1270 | lr_mul 1.0000 | mom 0.950 | 3.79 steps/s | 396s
step 1600/9999 | loss 2.1350 | lr_mul 1.0000 | mom 0.950 | 3.79 steps/s | 422s
step 1700/9999 | loss 2.1260 | lr_mul 1.0000 | mom 0.950 | 3.79 steps/s | 448s
step 1800/9999 | loss 2.1136 | lr_mul 1.0000 | mom 0.950 | 3.79 steps/s | 474s
step 1900/9999 | loss 2.1167 | lr_mul 1.0000 | mom 0.950 | 3.80 steps/s | 500s
step 2000/9999 | loss 2.1099 | lr_mul 1.0000 | mom 0.950 | 3.80 steps/s | 527s
step 2100/9999 | loss 2.0972 | lr_mul 1.0000 | mom 0.950 | 3.80 steps/s | 553s
step 2200/9999 | loss 2.1001 | lr_mul 1.0000 | mom 0.950 | 3.80 steps/s | 579s
Wallclock limit reached (590s), will stop after this step
Stopping early at step 2242 (wallclock limit)

Training complete in 590s (2242 steps)
Peak memory: 35748 MiB
Steps/sec: 3.80

=== Applying EMA weights ===
EMA BPB (no XSA): 1.007164
Saved raw EMA model

=== GPTQ: generating autoregressive calibration data ===
GPTQ: generated 64 sequences, collecting hessians...
GPTQ: collected hessians for 29 layers

=== Quantizing to int6 + zstd-22 ===
Artifact: 15,903,365 bytes (15.17 MB)

=== Roundtrip Validation (quantized model) ===
Quantized BPB (no XSA): 1.015700
Quantization degradation: +0.008536
Quantized BPB (XSA-all): 1.020105

================================================================================
FINAL RESULTS — D_GDN_Hybrid seed=1337
Training: 2242 steps, 590s
EMA BPB: 1.007164
Quantized BPB: 1.015700
XSA BPB: 1.020105
Artifact: /root/pg-repo/records/track_10min_16mb/2026-04-12_JM_GDN_Hybrid_Warmdown1000_CompressedCode_Seeds42_1337_2024/checkpoints/final_model_D_GDN_Hybrid_seed1337.int6.ptz
================================================================================