From d0bec2ecddd8da5c7e4f4e0794b59e63e0ff5a5c Mon Sep 17 00:00:00 2001
From: Anastasia Alexandrova
Date: Wed, 26 Jul 2023 20:28:34 +0200
Subject: [PATCH] DISTPG-557 Updated HA doc on DEB and RHEL (#396)

DISTPG-557 Updated HA availability doc on DEB and RHEL

modified: docs/solutions/ha-setup-apt.md
modified: docs/solutions/ha-setup-yum.md
---
 .../diagrams/ha-architecture-patroni.png | Bin 0 -> 32695 bytes
 docs/solutions/ha-setup-apt.md           | 342 +++++++++-------
 docs/solutions/ha-setup-yum.md           | 368 ++++++++++++------
 docs/solutions/high-availability.md      |  51 ++-
 4 files changed, 490 insertions(+), 271 deletions(-)
 create mode 100644 docs/_images/diagrams/ha-architecture-patroni.png

diff --git a/docs/_images/diagrams/ha-architecture-patroni.png b/docs/_images/diagrams/ha-architecture-patroni.png
new file mode 100644
index 0000000000000000000000000000000000000000..258aa1443a6debf8e164b11da30d5550256704ca
GIT binary patch
literal 32695
zV7@}}@rO;4H?ewoj$z!t&Nb;|c&3*nK0=ojGKdNEq9Cu$e}EMP;lJv;C&`h3YN95L zU6#c`fIoSZ<|GO1N6dE;6+*!&-Mw-KAz?{oBCgk0mzo8ql%DV}VHB}G+-dU3-ePK$ z;R=Y?%JL0dtT!|zmirL!$TSd$?cW+$oM5$hzj{%lCv*^K$^1@otE6_WC}i8o6JY%F zip&K|>=r3WVOG8T@wj{HN@CN?^`eC>L5zvLU56qLCqm(@yojixT_Of?T|>F_0zs(e zytNSv0gsxMDVECCInP-~HK=znL75naOs!Lver=u{$XG|o@_NG1wT~52B9$#lw zrZ3h8xO_fCU=>syi_^RelxkmK2vU}#Y-LlIbD>=<9`ZpxV&b{~B=jxkHrup&$;Z51 z-sX#^begUfk)NFL%DjJ#LTin59s!sk3#h zpb+CL2q}zYxPX^M)JP4&!{5amowgC_B>h}nr0^T zK{79f+Nfc@tm*qIyb8RLqVa~>)#bQFSY~Ik!n`^qs&$57LacDzCi3y?9rIX#XXzLS zCV4<_t|*eh)s(w@<_otBHMfD~KDiCMuTE=7E>Ng{h+a7yk7;DYz*PsFT{WYVYoof{(X16&_O1m`M48 zX+Mpd?{*S1J7C$c@97N@?3?C?Fszo^k+JR@gX=yT|W)-ThI`VOGVVPx>y5&@4S zus_B5VXX%Ku*KmkWI!%8N4xElhfy`sIAoTv25C9ze~a6J7?^)-w8#hWmm*J`UDc{gIHTh8Ht=^XDZ$-AT(p_%mszj);WrnS`GRjo^hx6VF7V<^-C#? znwsW$7>!T6ZnR-g36|9Zm;#OM&4Ym3#F(9aNg9>E`iR1$JYML%-*~4LxKb}B+4=M< z5=HZ8lzM1jA=zD1mm-8H1=xLOvcs4o9}hX7I{5y12H!kz`|#2Sd9`ik)MOQRY1<6O z+CQ~~8E-pJHUs~Gfm{3%*u1OHXKCD+{Me$t{!j|%3)~^`R(}NCQ#6soy5DL{BuL#) zjDbZdsH}0N+~qWQsr++F?ayqVVu+i33PIW>OJTK^S!qDYjTuqnTzuHT{tL)Z`%yNw zebJfE*>hZmh)J4HXI$}Lu8_^w)Yhlfq5awD3a093H{aV#_QSNFg`&>AodXW0p1B}f zvv>*$^;E}i-YUq8s6$oJ|D9|zFq_R&#K3l%sQB~RtZ|A2Ku&x@SWy^hY3l2df7AwC zDSr(F#^XIHF6n;-iGkaP?_cp!8hU+s%C;I3!OQe#c>iwx``OKe|0?lDF8}}IjSc=E f$w99f>+05L4#w;4UheUm7uzezsl6zbF$wq|c*1md literal 0 HcmV?d00001 diff --git a/docs/solutions/ha-setup-apt.md b/docs/solutions/ha-setup-apt.md index 0eaadad8c..a32ee5b05 100644 --- a/docs/solutions/ha-setup-apt.md +++ b/docs/solutions/ha-setup-apt.md @@ -2,48 +2,76 @@ This guide provides instructions on how to set up a highly available PostgreSQL cluster with Patroni on Debian or Ubuntu. +## Considerations -## Preconditions +1. This is the example deployment suitable to be used for testing purposes in non-production environments. +2. In this setup ETCD resides on the same hosts as Patroni. In production, consider deploying ETCD cluster on dedicated hosts or at least have separate disks for ETCD and PostgreSQL. This is because ETCD writes every request from the cluster to disk which can be CPU intensive and affects disk performance. See [hardware recommendations](https://etcd.io/docs/v3.6/op-guide/hardware/) for details. +3. For this setup, we will use the nodes running on Ubuntu 20.04 as the base operating system: -For this setup, we will use the nodes running on Ubuntu 20.04 as the base operating system and having the following IP addresses: - -| Node name | Public IP address | Internal IP address -|---------------|-------------------|-------------------- -| node1 | 157.230.42.174 | 10.104.0.7 -| node2 | 68.183.177.183 | 10.104.0.2 -| node3 | 165.22.62.167 | 10.104.0.8 -| HAProxy-demo | 134.209.111.138 | 10.104.0.6 + | Node name | Application | IP address + |---------------|-------------------|-------------------- + | node1 | Patroni, PostgreSQL, ETCD | 10.104.0.1 + | node2 | Patroni, PostgreSQL, ETCD | 10.104.0.2 + | node3 | Patroni, PostgreSQL, ETCD | 10.104.0.3 + | HAProxy-demo | HAProxy | 10.104.0.6 !!! note - In a production (or even non-production) setup, the PostgreSQL nodes will be within a private subnet without any public connectivity to the Internet, and the HAProxy will be in a different subnet that allows client traffic coming only from a selected IP range. To keep things simple, we have implemented this architecture in a DigitalOcean VPS environment, and each node can access the other by its internal, private IP. 
+ Ideally, in a production (or even non-production) setup, the PostgreSQL nodes will be within a private subnet without any public connectivity to the Internet, and the HAProxy will be in a different subnet that allows client traffic coming only from a selected IP range. To keep things simple, we have implemented this architecture in a private environment, and each node can access the other by its internal, private IP. -### Setting up hostnames in the `/etc/hosts` file +## Preparation -To make the nodes aware of each other and allow their seamless communication, resolve their hostnames to their public IP addresses. Modify the `/etc/hosts` file of each node as follows: +### Set up hostnames in the `/etc/hosts` file -| node 1 | node 2 | node 3 -|---------------------------| --------------------------|----------------------- -| 127.0.0.1 localhost node1
10.104.0.7 node1
**10.104.0.2 node2**
**10.104.0.8 node3**
| 127.0.0.1 localhost node2
**10.104.0.7 node1**
10.104.0.2 node2
**10.104.0.8 node3**
| 127.0.0.1 localhost node3
**10.104.0.7 node1**
**10.104.0.2 node2**
10.104.0.8 node3
+It's not necessary to have name resolution, but it makes the whole setup more readable and less error prone. Here, instead of configuring a DNS, we use a local name resolution by updating the file `/etc/hosts`. By resolving their hostnames to their IP addresses, we make the nodes aware of each other's names and allow their seamless communication. +Modify the `/etc/hosts` file of each PostgreSQL node to include the hostnames and IP addresses of the remaining nodes. Add the following at the end of the `/etc/hosts` file on all nodes: -The `/etc/hosts` file of the HAProxy-demo node looks like the following: +=== "node1" -``` -127.0.1.1 HAProxy-demo HAProxy-demo -127.0.0.1 localhost -10.104.0.6 HAProxy-demo -10.104.0.7 node1 -10.104.0.2 node2 -10.104.0.8 node3 -``` + ```text hl_lines="3 4" + # Cluster IP and names + 10.104.0.1 node1 + 10.104.0.2 node2 + 10.104.0.3 node3 + ``` + +=== "node2" + + ```text hl_lines="2 4" + # Cluster IP and names + 10.104.0.1 node1 + 10.104.0.2 node2 + 10.104.0.3 node3 + ``` + +=== "node3" + + ```text hl_lines="2 3" + # Cluster IP and names + 10.104.0.1 node1 + 10.104.0.2 node2 + 10.104.0.3 node3 + ``` + +=== "HAproxy-demo" + + The HAProxy instance should have the name resolution for all the three nodes in its `/etc/hosts` file. Add the following lines at the end of the file: + + ```text hl_lines="4 5 6" + # Cluster IP and names + 10.104.0.6 HAProxy-demo + 10.104.0.1 node1 + 10.104.0.2 node2 + 10.104.0.3 node3 + ``` -### Install Percona Distribution for PostgreSQL +## Install Percona Distribution for PostgreSQL -1. Follow the [installation instructions](../installing.md#on-debian-and-ubuntu-using-apt) to install Percona Distribution for PostgreSQL on `node1`, `node2` and `node3`. +1. [Install Percona Distribution for PostgreSQL](../apt.md) on `node1`, `node2` and `node3`. -2. Remove the data directory. Patroni requires a clean environment to initialize a new cluster. Use the following commands to stop the PostgreSQL service and then remove the data directory: +2. Even though Patroni can use an existing Postgres installation, remove the data directory to force it to initialize a new Postgres cluster instance. Use the following commands to stop the PostgreSQL service and then remove the data directory: ```{.bash data-prompt="$"} $ sudo systemctl stop postgresql @@ -52,7 +80,7 @@ The `/etc/hosts` file of the HAProxy-demo node looks like the following: ## Configure ETCD distributed store -The distributed configuration store helps establish a consensus among nodes during a failover and will manage the configuration for the three PostgreSQL instances. Although Patroni can work with other distributed consensus stores (i.e., Zookeeper, Consul, etc.), the most commonly used one is `etcd`. +The distributed configuration store provides a reliable way to store data that needs to be accessed by large scale distributed systems. The most popular implementation of the distributed configuration store is ETCD. ETCD is deployed as a cluster for fault-tolerance and requires an odd number of members (n/2+1) to agree on updates to the cluster state. An ETCD cluster helps establish a consensus among nodes during a failover and manages the configuration for the three PostgreSQL instances. The `etcd` cluster is first started in one node and then the subsequent nodes are added to the first node using the `add `command. The configuration is stored in the `/etc/default/etcd` file. 
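
As a quick illustration of the quorum rule mentioned above: with three members, the cluster needs at least two healthy nodes (n/2+1 = 2) to accept updates, so it can tolerate the loss of any single node. The following is a minimal sketch of how you might verify the cluster once all members have joined. It assumes the same endpoints used in this guide and the etcd v2 `etcdctl` syntax that matches the `member list` output shown here; newer etcd releases use `etcdctl endpoint health` instead.

```{.bash data-prompt="$"}
$ # With 3 members, quorum is floor(3/2)+1 = 2, so a single node may fail.
$ sudo etcdctl cluster-health
$ # Query one member directly to confirm it answers on the client port
$ sudo etcdctl --endpoints http://10.104.0.1:2379 member list
```
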
@@ -62,30 +90,74 @@ The `etcd` cluster is first started in one node and then the subsequent nodes ar $ sudo apt install etcd ``` -2. Modify the `/etc/default/etcd` configuration file on each node. +2. Configure ETCD on `node1`. - * On `node1`, add the IP address of `node1` to the `ETCD_INITIAL_CLUSTER` parameter. The configuration file looks as follows: + * Back up the configuration file - ```text + ```{.bash data-promp="$"} + $ sudo mv /etc/default/etcd /etc/default/etcd.orig + ``` + + * Modify the `/etc/default/etcd` configuration file and add the IP address of `node1` (10.104.0.1) to the `ETCD_INITIAL_CLUSTER` parameter. + + ```text ETCD_NAME=node1 - ETCD_INITIAL_CLUSTER="node1=http://10.104.0.7:2380" - ETCD_INITIAL_CLUSTER_TOKEN="devops_token" + ETCD_INITIAL_CLUSTER="node1=http://10.104.0.1:2380" + ETCD_INITIAL_CLUSTER_TOKEN="percona-etcd-cluster" ETCD_INITIAL_CLUSTER_STATE="new" - ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.104.0.7:2380" + ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.104.0.1:2380" ETCD_DATA_DIR="/var/lib/etcd/postgresql" - ETCD_LISTEN_PEER_URLS="http://10.104.0.7:2380" - ETCD_LISTEN_CLIENT_URLS="http://10.104.0.7:2379,http://localhost:2379" - ETCD_ADVERTISE_CLIENT_URLS="http://10.104.0.7:2379" + ETCD_LISTEN_PEER_URLS="http://10.104.0.1:2380" + ETCD_LISTEN_CLIENT_URLS="http://10.104.0.1:2379,http://localhost:2379" + ETCD_ADVERTISE_CLIENT_URLS="http://10.104.0.1:2379" … ``` - * On `node2`, add the IP addresses of both `node1` and `node2` to the `ETCD_INITIAL_CLUSTER` parameter: +3. Start the `etcd` service to apply the changes on `node1`. + + ```{.bash data-prompt="$"} + $ sudo systemctl enable etcd + $ sudo systemctl start etcd + $ sudo systemctl status etcd + ``` + +4. Check the etcd cluster members on `node1`: + + ```{.bash data-prompt="$"} + $ sudo etcdctl member list + ``` + + Sample output: + + ```{.text .no-copy} + 21d50d7f768f153a: name=default peerURLs=http://10.104.0.1:2380 clientURLs=http://10.104.0.1:2379 isLeader=true + ``` + +5. Add the `node2` to the cluster. Run the following command on `node1`: + + ```{.bash data-prompt="$"} + $ sudo etcdctl member add node2 http://10.104.0.2:2380 + ``` + + The output resembles the following one: + + ```{.text .no-copy} + Added member named node2 with ID 10042578c504d052 to cluster + + ETCD_NAME="node2" + ETCD_INITIAL_CLUSTER="node2=http://10.104.0.2:2380,node1=http://10.104.0.1:2380" + ETCD_INITIAL_CLUSTER_STATE="existing" + ``` + +6. Configure `etcd` on `node2` using the output from adding the node to the cluster. Edit the `/etc/default/etcd` configuration file on `node2` and use the result of the `add` command to change the configuration file as follows: ```text + [Member] ETCD_NAME=node2 ETCD_INITIAL_CLUSTER="node1=http://10.104.0.7:2380,node2=http://10.104.0.2:2380" - ETCD_INITIAL_CLUSTER_TOKEN="devops_token" ETCD_INITIAL_CLUSTER_STATE="existing" + + ETCD_INITIAL_CLUSTER_TOKEN="percona-etcd-cluster" ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.104.0.2:2380" ETCD_DATA_DIR="/var/lib/etcd/postgresql" ETCD_LISTEN_PEER_URLS="http://10.104.0.2:2380" @@ -94,35 +166,45 @@ The `etcd` cluster is first started in one node and then the subsequent nodes ar … ``` - * On `node3`, the `ETCD_INITIAL_CLUSTER` parameter includes the IP addresses of all three nodes: +7. Start the `etcd` service to apply the changes on `node2`: + + ```{.bash data-prompt="$"} + $ sudo systemctl enable etcd + $ sudo systemctl start etcd + $ sudo systemctl status etcd + ``` + +8. Add `node3` to the cluster. 
Run the following command on `node1` + + ```{.bash data-prompt="$"} + $ sudo etcdctl member add node3 http://10.104.0.3:2380 + ``` + +9. Configure `etcd` on `node3` using the same approach as for `node2`. Modify the `/etc/default/etcd` configuration file and add the output of the `add` command: ```text ETCD_NAME=node3 - ETCD_INITIAL_CLUSTER="node1=http://10.104.0.7:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.8:2380" - ETCD_INITIAL_CLUSTER_TOKEN="devops_token" + ETCD_INITIAL_CLUSTER="node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380" ETCD_INITIAL_CLUSTER_STATE="existing" - ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.104.0.8:2380" + + ETCD_INITIAL_CLUSTER_TOKEN="percona-etcd-cluster" + ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.104.0.3:2380" ETCD_DATA_DIR="/var/lib/etcd/postgresql" - ETCD_LISTEN_PEER_URLS="http://10.104.0.8:2380" - ETCD_LISTEN_CLIENT_URLS="http://10.104.0.8:2379,http://localhost:2379" - ETCD_ADVERTISE_CLIENT_URLS="http://10.104.0.8:2379" + ETCD_LISTEN_PEER_URLS="http://10.104.0.3:2380" + ETCD_LISTEN_CLIENT_URLS="http://10.104.0.3:2379,http://localhost:2379" + ETCD_ADVERTISE_CLIENT_URLS="http://10.104.0.3:2379" … ``` -3. On `node1`, add `node2` and `node3` to the cluster using the `add` command: - - ```{.bash data-prompt="$"} - $ sudo etcdctl member add node2 http://10.104.0.2:2380 - $ sudo etcdctl member add node3 http://10.104.0.8:2380 - ``` - -4. Restart the `etcd` service on `node2` and `node3`: +10. Start the `etcd` service on `node3`: ```{.bash data-prompt="$"} - $ sudo systemctl restart etcd + $ sudo systemctl enable etcd + $ sudo systemctl start etcd + $ sudo systemctl status etcd ``` -5. Check the etcd cluster members. +11. Check the etcd cluster members. ```{.bash data-prompt="$"} $ sudo etcdctl member list @@ -131,9 +213,9 @@ The `etcd` cluster is first started in one node and then the subsequent nodes ar The output resembles the following: ``` - 21d50d7f768f153a: name=node1 peerURLs=http://10.104.0.7:2380 clientURLs=http://10.104.0.7:2379 isLeader=true - af4661d829a39112: name=node2 peerURLs=http://10.104.0.2:2380 clientURLs=http://10.104.0.2:2379 isLeader=false - e3f3c0c1d12e9097: name=node3 peerURLs=http://10.104.0.8:2380 clientURLs=http://10.104.0.8:2379 isLeader=false + 2d346bd3ae7f07c4: name=node2 peerURLs=http://10.104.0.2:2380 clientURLs=http://10.104.0.2:2379 isLeader=false + 8bacb519ebdee8db: name=node3 peerURLs=http://10.104.0.3:2380 clientURLs=http://10.104.0.3:2379 isLeader=false + c5f52ea2ade25e1b: name=node1 peerURLs=http://10.104.0.1:2380 clientURLs=http://10.104.0.1:2379 isLeader=true ``` ## Set up the watchdog service @@ -216,88 +298,86 @@ crw------- 1 root root 245, 0 Sep 11 12:53 /dev/watchdog0 $ sudo apt install percona-patroni ``` -2. Create the `patroni.yml` configuration file under the `/etc/patroni` directory. The file holds the default configuration values for a PostgreSQL cluster and will reflect the current cluster setup. - -3. Add the following configuration for `node1`: +2. 
Create the `/etc/patroni/patroni.yml` configuration file and add the following configuration for `node1`: ```yaml - scope: stampede1 - name: node1 + scope: cluster_1 + namespace: percona_lab restapi: - listen: 0.0.0.0:8008 - connect_address: node1:8008 + listen: 0.0.0.0:8008 + connect_address: 10.104.0.1:8008 etcd: - host: node1:2379 + host: 10.104.0.1:2379 bootstrap: # this section will be written into Etcd:///config after initializing new cluster dcs: - ttl: 30 - loop_wait: 10 - retry_timeout: 10 - maximum_lag_on_failover: 1048576 - # primary_start_timeout: 300 - # synchronous_mode: false - postgresql: - use_pg_rewind: true - use_slots: true - parameters: - wal_level: replica - hot_standby: "on" - logging_collector: 'on' - max_wal_senders: 5 - max_replication_slots: 5 - wal_log_hints: "on" - #archive_mode: "on" - #archive_timeout: 600 - #archive_command: "cp -f %p /home/postgres/archived/%f" - #recovery_conf: - #restore_command: cp /home/postgres/archived/%f %p + ttl: 30 + loop_wait: 10 + retry_timeout: 10 + maximum_lag_on_failover: 1048576 + slots: + percona_cluster_1: + type: physical + + postgresql: + use_pg_rewind: true + use_slots: true + parameters: + wal_level: replica + hot_standby: "on" + wal_keep_segments: 10 + max_wal_senders: 5 + max_replication_slots: 10 + wal_log_hints: "on" + logging_collector: 'on' # some desired options for 'initdb' - initdb: # Note: It needs to be a list (some options need values, others are switches) - - encoding: UTF8 - - data-checksums - - pg_hba: # Add following lines to pg_hba.conf after running 'initdb' - - host all all 10.104.0.7/32 md5 - - host replication replicator 127.0.0.1/32 trust - - host all all 10.104.0.2/32 md5 - - host all all 10.104.0.8/32 md5 - - host all all 10.104.0.6/32 trust - # - hostssl all all 0.0.0.0/0 md5 - - # Additional script to be launched after initial cluster creation (will be passed the connection URL as parameter) - # post_init: /usr/local/bin/setup_cluster.sh - # Some additional users users which needs to be created after initializing new cluster + initdb: # Note: It needs to be a list (some options need values, others are switches) + - encoding: UTF8 + - data-checksums + + pg_hba: # Add following lines to pg_hba.conf after running 'initdb' + - host replication replicator 127.0.0.1/32 trust + - host replication replicator 0.0.0.0/0 md5 + - host all all 0.0.0.0/0 md5 + - host all all ::0/0 md5 + + # Some additional users which needs to be created after initializing new cluster users: - admin: - password: admin - options: - - createrole - - createdb - replicator: - password: password - options: - - replication + admin: + password: qaz123 + options: + - createrole + - createdb + percona: + password: qaz123 + options: + - createrole + - createdb + postgresql: - listen: 0.0.0.0:5432 - connect_address: node1:5432 - data_dir: "/var/lib/postgresql/12/main" - bin_dir: "/usr/lib/postgresql/12/bin" - # config_dir: - pgpass: /tmp/pgpass0 - authentication: - replication: - username: replicator - password: password - superuser: - username: postgres - password: password - parameters: - unix_socket_directories: '/var/run/postgresql' + cluster_name: cluster_1 + listen: 0.0.0.0:5432 + connect_address: 10.104.0.1:5432 + data_dir: /data/pgsql + bin_dir: /usr/pgsql-14/bin + pgpass: /tmp/pgpass + authentication: + replication: + username: replicator + password: replPasswd + superuser: + username: postgres + password: qaz123 + parameters: + unix_socket_directories: "/var/run/postgresql/" + create_replica_methods: + - basebackup + 
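    # 'basebackup' tells Patroni to create new replicas with pg_basebackup;
    # the options under the 'basebackup' key below are passed through to that
    # command (checkpoint: 'fast' becomes --checkpoint=fast).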
basebackup: + checkpoint: 'fast' watchdog: mode: required # Allowed values: off, automatic, required @@ -320,7 +400,7 @@ crw------- 1 root root 245, 0 Sep 11 12:53 /dev/watchdog0 Following these, there is a `bootstrap` section that contains the PostgreSQL configurations and the steps to run once the database is initialized. The `pg_hba.conf` entries specify all the other nodes that can connect to this node and their authentication mechanism. -4. Create the configuration files for `node2` and `node3`. Replace the reference to `node1` with `node2` and `node3`, respectively. +4. Create the configuration files for `node2` and `node3`. Replace the **node name and IP address** of `node1` to those of `node2` and `node3`, respectively.. 5. Enable and restart the patroni service on every node. Use the following commands: ```{.bash data-prompt="$"} @@ -404,14 +484,14 @@ postgres=# ## Configure HAProxy -HAProxy node will accept client connection requests and route those to the active node of the PostgreSQL cluster. This way, a client application doesn’t have to know what node in the underlying cluster is the current primary. All it needs to do is to access a single HAProxy URL and send its read/write requests there. Behind-the-scene, HAProxy routes the connection to a healthy node (as long as there is at least one healthy node available) and ensures that client application requests are never rejected. +HAproxy is the load balancer and the single point of entry to your PostgreSQL cluster for client applications. A client application accesses the HAPpoxy URL and sends its read/write requests there. Behind-the-scene, HAProxy routes write requests to the primary node and read requests - to the secondaries in a round-robin fashion so that no secondary instance is unnecessarily loaded. To make this happen, provide different ports in the HAProxy configuration file. In this deployment, writes are routed to port 5000 and reads - to port 5001 -HAProxy is capable of routing write requests to the primary node and read requests - to the secondaries in a round-robin fashion so that no secondary instance is unnecessarily loaded. To make this happen, provide different ports in the HAProxy configuration file. In this deployment, writes are routed to port 5000 and reads - to port 5001. +This way, a client application doesn’t know what node in the underlying cluster is the current primary. HAProxy sends connections to a healthy node (as long as there is at least one healthy node available) and ensures that client application requests are never rejected. 1. Install HAProxy on the `HAProxy-demo` node: ```{.bash data-prompt="$"} - $ sudo apt install haproxy + $ sudo apt install percona-haproxy ``` 2. The HAProxy configuration file path is: `/etc/haproxy/haproxy.cfg`. Specify the following configuration in this file. diff --git a/docs/solutions/ha-setup-yum.md b/docs/solutions/ha-setup-yum.md index 22aefa786..10e4688de 100644 --- a/docs/solutions/ha-setup-yum.md +++ b/docs/solutions/ha-setup-yum.md @@ -3,53 +3,95 @@ This guide provides instructions on how to set up a highly available PostgreSQL cluster with Patroni on Red Hat Enterprise Linux or CentOS. -## Preconditions +## Considerations -For this setup, we will use the nodes running on CentOS 8 as the base operating system and having the following IP addresses: +1. This is the example deployment suitable to be used for testing purposes in non-production environments. +2. In this setup ETCD resides on the same hosts as Patroni. 
In production, consider deploying ETCD cluster on dedicated hosts because ETCD writes every request from the cluster to disk which requires significant amount of disk space. See [hardware recommendations](https://etcd.io/docs/v3.6/op-guide/hardware/) for details. +3. For this setup, we use the nodes running on Red Hat Enterprise Linux 8 as the base operating system: + + | Node name | Application | IP address + |---------------|-------------------|-------------------- + | node1 | Patroni, PostgreSQL, ETCD | 10.104.0.1 + | node2 | Patroni, PostgreSQL, ETCD | 10.104.0.2 + | node3 | Patroni, PostgreSQL, ETCD | 10.104.0.3 + | HAProxy-demo | HAProxy | 10.104.0.6 -| Hostname | Public IP address | Internal IP address -|---------------|-------------------|-------------------- -| node1 | 157.230.42.174 | 10.104.0.7 -| node2 | 68.183.177.183 | 10.104.0.2 -| node3 | 165.22.62.167 | 10.104.0.8 -| etcd | 159.102.29.166 | 10.104.0.5 -| HAProxy-demo | 134.209.111.138 | 10.104.0.6 !!! note - In a production (or even non-production) setup, the PostgreSQL and ETCD nodes will be within a private subnet without any public connectivity to the Internet, and the HAProxy will be in a different subnet that allows client traffic coming only from a selected IP range. To keep things simple, we have implemented this architecture in a DigitalOcean VPS environment, and each node can access the other by its internal, private IP. + Ideally, in a production (or even non-production) setup, the PostgreSQL and ETCD nodes will be within a private subnet without any public connectivity to the Internet, and the HAProxy will be in a different subnet that allows client traffic coming only from a selected IP range. To keep things simple, we have implemented this architecture in a private environment, and each node can access the other by its internal, private IP. + +## Preparation + +### Set up hostnames in the `/etc/hosts` file + +It's not necessary to have name resolution, but it makes the whole setup more readable and less error prone. Here, instead of configuring a DNS, we use a local name resolution by updating the file `/etc/hosts`. By resolving their hostnames to their IP addresses, we make the nodes aware of each other's names and allow their seamless communication. + +Modify the `/etc/hosts` file of each PostgreSQL node to include the hostnames and IP addresses of the remaining nodes. Add the following at the end of the `/etc/hosts` file on all nodes: + +=== "node1" + + ```text hl_lines="3 4" + # Cluster IP and names + 10.104.0.1 node1 + 10.104.0.2 node2 + 10.104.0.3 node3 + ``` + +=== "node2" + + ```text hl_lines="2 4" + # Cluster IP and names + 10.104.0.1 node1 + 10.104.0.2 node2 + 10.104.0.3 node3 + ``` + +=== "node3" + + ```text hl_lines="2 3" + # Cluster IP and names + 10.104.0.1 node1 + 10.104.0.2 node2 + 10.104.0.3 node3 + ``` + +=== "HAproxy-demo" + + The HAProxy instance should have the name resolution for all the three nodes in its `/etc/hosts` file. Add the following lines at the end of the file: + + ```text hl_lines="4 5 6" + # Cluster IP and names + 10.104.0.6 HAProxy-demo + 10.104.0.1 node1 + 10.104.0.2 node2 + 10.104.0.3 node3 + ``` + +## Install Percona Distribution for PostgreSQL -## Setting up hostnames in the `/etc/hosts` file +Install Percona Distribution for PostgreSQL on `node1`, `node2` and `node3` from Percona repository: -To make the nodes aware of each other and allow their seamless communication, resolve their hostnames to their public IP addresses. 
Modify the `/etc/hosts` file of each PostgreSQL node to include the hostnames and IP addresses of the remaining nodes. The following is the `/etc/hosts` file for `node1`: +1. [Install `percona-release`](https://www.percona.com/doc/percona-repo-config/installing.html). +2. Enable the repository: -``` -127.0.0.1 localhost node1 -10.104.0.7 node1 -10.104.0.2 node2 -10.104.0.8 node3 -``` + ```{.bash data-prompt="$"} + $ sudo percona-release setup ppg11 + ``` -The `/etc/hosts` file of the `HAProxy-demo` node hostnames and IP addresses of all PostgreSQL nodes: +3. [Install Percona Distribution for PostgreSQL packages](../installing.md#on-red-hat-enterprise-linux-and-centos-using-yum). -``` -127.0.1.1 HAProxy-demo HAProxy-demo -127.0.0.1 localhost -10.104.0.6 HAProxy-demo -10.104.0.7 node1 -10.104.0.2 node2 -10.104.0.8 node3 -``` +!!! important -Keep the `/etc/hosts` file of the `etcd` node unchanged. + **Don't** initialize the cluster and start the `postgresql` service. The cluster initialization and setup are handled by Patroni during the bootsrapping stage. ## Configure ETCD distributed store -The distributed configuration store helps establish a consensus among nodes during a failover and will manage the configuration for the three PostgreSQL instances. Although Patroni can work with other distributed consensus stores (i.e., Zookeeper, Consul, etc.), the most commonly used one is `etcd`. +The distributed configuration store provides a reliable way to store data that needs to be accessed by large scale distributed systems. The most popular implementation of the distributed configuration store is ETCD. ETCD is deployed as a cluster for fault-tolerance and requires an odd number of members (n/2+1) to agree on updates to the cluster state. An ETCD cluster helps establish a consensus among nodes during a failover and manages the configuration for the three PostgreSQL instances. -In this setup we will configure ETCD on a dedicated node. +The `etcd` cluster is first started in one node and then the subsequent nodes are added to the first node using the `add `command. The configuration is stored in the `/etc/etcd/etcd.conf` configuration file. -1. Install `etcd` on the ETCD node. For CentOS 8, the etcd packages are available from Percona repository: +1. Install `etcd` on every PostgreSQL node. For CentOS 8, the `etcd` packages are available from Percona repository: - [Install `percona-release`](https://www.percona.com/doc/percona-repo-config/installing.html). - Enable the repository: @@ -64,24 +106,31 @@ In this setup we will configure ETCD on a dedicated node. $ sudo yum install etcd python3-python-etcd ``` -2. Modify the `/etc/etcd/etcd.conf` configuration file: +2. Configure ETCD on `node1`. 
- ```text - [Member] - ETCD_DATA_DIR="/var/lib/etcd/default.etcd" - ETCD_LISTEN_PEER_URLS="http://10.104.0.5:2380,http://localhost:2380" - ETCD_LISTEN_CLIENT_URLS="http://10.104.0.5:2379,http://localhost:2379" + Backup the `etcd.conf` file: + + ```{.bash data-promp="$"} + sudo mv /etc/etcd/etcd.conf /etc/etcd/etcd.conf.orig + ``` + Modify the `/etc/etcd/etcd.conf` configuration file: - ETCD_NAME="default" - ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.104.0.5:2380" - ETCD_ADVERTISE_CLIENT_URLS="http://10.104.0.5:2379" - ETCD_INITIAL_CLUSTER="default=http://10.104.0.5:2380" - ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster" - ETCD_INITIAL_CLUSTER_STATE="new" - ``` + ```text + [Member] + ETCD_DATA_DIR="/var/lib/etcd/default.etcd" + ETCD_LISTEN_PEER_URLS="http://10.104.0.1:2380,http://localhost:2380" + ETCD_LISTEN_CLIENT_URLS="http://10.104.0.1:2379,http://localhost:2379" -3. Start the `etcd` to apply the changes: + ETCD_NAME="node1" + ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.104.0.1:2380" + ETCD_ADVERTISE_CLIENT_URLS="http://10.104.0.1:2379" + ETCD_INITIAL_CLUSTER="node1=http://10.104.0.1:2380" + ETCD_INITIAL_CLUSTER_TOKEN="percona-etcd-cluster" + ETCD_INITIAL_CLUSTER_STATE="new" + ``` + +3. Start the `etcd` to apply the changes on `node1`: ```{.bash data-prompt="$"} $ sudo systemctl enable etcd @@ -89,7 +138,7 @@ In this setup we will configure ETCD on a dedicated node. $ sudo systemctl status etcd ``` -5. Check the etcd cluster members. +5. Check the etcd cluster members on `node1`. ```{.bash data-prompt="$"} $ sudo etcdctl member list @@ -97,26 +146,93 @@ In this setup we will configure ETCD on a dedicated node. The output resembles the following: - ``` + ```{.text .no-copy} 21d50d7f768f153a: name=default peerURLs=http://10.104.0.5:2380 clientURLs=http://10.104.0.5:2379 isLeader=true ``` -## Install Percona Distribution for PostgreSQL +6. Add `node2` to the cluster. Run the following command on `node1`: + + ```{.bash data-prompt="$"} + $ sudo etcdctl member add node2 http://10.104.0.2:2380 + ``` -Install Percona Distribution for PostgreSQL on `node1`, `node2` and `node3` from Percona repository: + The output resembles the following one: + + ```{.text .no-copy} + Added member named node2 with ID 10042578c504d052 to cluster -1. [Install `percona-release`](https://www.percona.com/doc/percona-repo-config/installing.html). -2. Enable the repository: + ETCD_NAME="node2" + ETCD_INITIAL_CLUSTER="node2=http://10.104.0.2:2380,node1=http://10.104.0.1:2380" + ETCD_INITIAL_CLUSTER_STATE="existing" + ``` +7. Edit the `/etc/etcd/etcd.conf` configuration file on `node2` and add the output from the `add` command: + + ```text + [Member] + ETCD_NAME="node2" + ETCD_INITIAL_CLUSTER="node2=http://10.104.0.2:2380,node1=http://10.104.0.1:2380" + ETCD_INITIAL_CLUSTER_STATE="existing" + ETCD_DATA_DIR="/var/lib/etcd/default.etcd" + ETCD_INITIAL_CLUSTER_TOKEN="percona-etcd-cluster" + ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.104.0.2:2380" + ETCD_LISTEN_PEER_URLS="http://10.104.0.2:2380" + ETCD_LISTEN_CLIENT_URLS="http://10.104.0.2:2379,http://localhost:2379" + ETCD_ADVERTISE_CLIENT_URLS="http://10.104.0.2:2379" + ``` + +8. Start the `etcd` to apply the changes on `node2`: + + ```{.bash data-promp="$"} + $ sudo systemctl enable etcd + $ sudo systemctl start etcd + $ sudo systemctl status etcd + ``` + +9. Add `node3` to the cluster. Run the following command on `node1`: + ```{.bash data-prompt="$"} - $ sudo percona-release setup ppg12 + $ sudo etcdctl member add node3 http://10.104.0.3:2380 ``` -3. 
[Install Percona Distribution for PostgreSQL packages](../installing.md#on-red-hat-enterprise-linux-and-centos-using-yum). +10. Configure `etcd` on `node3`. Edit the `/etc/etcd/etcd.conf` configuration file on `node3` and add the output from the `add` command as follows: -!!! important + ```text + ETCD_NAME=node3 + ETCD_INITIAL_CLUSTER="node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380" + ETCD_INITIAL_CLUSTER_STATE="existing" + + ETCD_INITIAL_CLUSTER_TOKEN="percona-etcd-cluster" + ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.104.0.3:2380" + ETCD_DATA_DIR="/var/lib/etcd/postgresql" + ETCD_LISTEN_PEER_URLS="http://10.104.0.3:2380" + ETCD_LISTEN_CLIENT_URLS="http://10.104.0.3:2379,http://localhost:2379" + ETCD_ADVERTISE_CLIENT_URLS="http://10.104.0.3:2379" + … + ``` + +11. Start the `etcd` service on `node3`: + + ```{.bash data-prompt="$"} + $ sudo systemctl enable etcd + $ sudo systemctl start etcd + $ sudo systemctl status etcd + ``` + +12. Check the etcd cluster members. + + ```{.bash data-prompt="$"} + $ sudo etcdctl member list + ``` + + The output resembles the following: + + ``` + 2d346bd3ae7f07c4: name=node2 peerURLs=http://10.104.0.2:2380 clientURLs=http://10.104.0.2:2379 isLeader=false + 8bacb519ebdee8db: name=node3 peerURLs=http://10.104.0.3:2380 clientURLs=http://10.104.0.3:2379 isLeader=false + c5f52ea2ade25e1b: name=node1 peerURLs=http://10.104.0.1:2380 clientURLs=http://10.104.0.1:2379 isLeader=true + ``` - **Don't** initialize the cluster and start the `postgresql` service. The cluster initialization and setup are handled by Patroni during the bootsrapping stage. ## Configure Patroni @@ -141,89 +257,91 @@ Install Percona Distribution for PostgreSQL on `node1`, `node2` and `node3` from $ sudo chown -R postgres:postgres /etc/patroni/ ``` - * Create the data directory for Patroni. Change its ownership to the `postgres` user and restrict the access to it + * Create the data directory to store PostgreSQL data. Change its ownership to the `postgres` user and restrict the access to it ```{.bash data-prompt="$"} - $ sudo mkdir /data/patroni -p - $ sudo chown -R postgres:postgres /data/patroni - $ sudo chmod 700 /data/patroni + $ sudo mkdir /data/pgsql -p + $ sudo chown -R postgres:postgres /data/pgsql + $ sudo chmod 700 /data/pgsql ``` -4. Create the `patroni.yml` configuration file. - - ```{.bash data-prompt="$"} - $ su postgres - $ vim /etc/patroni/patroni.yml - ``` - -5. Specify the following configuration: +4. 
Create the `/etc/patroni/patroni.yml` with the following configuration: ```yaml - scope: postgres - namespace: /pg_cluster/ + namespace: percona_lab + scope: cluster_1 name: node1 restapi: - listen: 10.104.0.7:8008 # PostgreSQL node IP address - connect_address: 10.104.0.7:8008 # PostgreSQL node IP address + listen: 10.104.0.7:8008 # PostgreSQL node IP address + connect_address: 10.104.0.7:8008 # PostgreSQL node IP address etcd: - host: 10.104.0.5:2379 # ETCD node IP address + host: 10.104.0.1:2379 # ETCD node IP address bootstrap: # this section will be written into Etcd:///config after initializing new cluster dcs: - ttl: 30 - loop_wait: 10 - retry_timeout: 10 - maximum_lag_on_failover: 1048576 - postgresql: - use_pg_rewind: true - use_slots: true - parameters: - wal_level: replica - hot_standby: "on" - logging_collector: 'on' - max_wal_senders: 5 - max_replication_slots: 5 - wal_log_hints: "on" - - # some desired options for 'initdb' - initdb: # Note: It needs to be a list (some options need values, others are switches) - - encoding: UTF8 - - data-checksums - - pg_hba: # Add following lines to pg_hba.conf after running 'initdb' - - host replication replicator 127.0.0.1/32 md5 - - host replication replicator 10.104.0.2/32 md5 - - host replication replicator 10.104.0.8/32 md5 - - host replication replicator 10.104.0.7/32 md5 - - host all all 0.0.0.0/0 md5 - # - hostssl all all 0.0.0.0/0 md5 - - # Some additional users users which needs to be created after initializing new cluster - users: - admin: - password: admin - options: - - createrole - - createdb + ttl: 30 + loop_wait: 10 + retry_timeout: 10 + maximum_lag_on_failover: 1048576 + slots: + percona_cluster_1: + type: physical + postgresql: + use_pg_rewind: true + use_slots: true + parameters: + wal_level: replica + hot_standby: "on" + wal_keep_segments: 10 + max_wal_senders: 5 + max_replication_slots: 10 + wal_log_hints: "on" + logging_collector: 'on' + # some desired options for 'initdb' + initdb: # Note: It needs to be a list (some options need values, others are switches) + - encoding: UTF8 + - data-checksums + pg_hba: # Add following lines to pg_hba.conf after running 'initdb' + - host replication replicator 127.0.0.1/32 trust + - host replication replicator 0.0.0.0/0 md5 + - host all all 0.0.0.0/0 md5 + - host all all ::0/0 md5 + # Some additional users which needs to be created after initializing new cluster + users: + admin: + password: qaz123 + options: + - createrole + - createdb + percona: + password: qaz123 + options: + - createrole + - createdb postgresql: - listen: 10.104.0.7:5432 # PostgreSQL node IP address - connect_address: 10.104.0.7:5432 # PostgreSQL node IP address - data_dir: /data/patroni # The datadir you created - bin_dir: /usr/pgsql-12/bin - pgpass: /tmp/pgpass0 + cluster_name: cluster_1 + listen: 0.0.0.0:5432 + connect_address: 10.104.0.1:5432 + data_dir: /data/pgsql + bin_dir: /usr/pgsql-14/bin + pgpass: /tmp/pgpass authentication: - replication: - username: replicator - password: replicator - superuser: - username: postgres - password: postgres + replication: + username: replicator + password: replPasswd + superuser: + username: postgres + password: qaz123 parameters: - unix_socket_directories: '.' + unix_socket_directories: "/var/run/postgresql/" + create_replica_methods: + - basebackup + basebackup: + checkpoint: 'fast' tags: nofailover: false @@ -232,9 +350,9 @@ Install Percona Distribution for PostgreSQL on `node1`, `node2` and `node3` from nosync: false ``` -6. 
Create the configuration files for `node2` and `node3`. Replace the node and IP address of `node1` to those of `node2` and `node3`, respectively.
+5. Create the configuration files for `node2` and `node3`. Replace the **node name and IP address** of `node1` with those of `node2` and `node3`, respectively.

-7. Create the systemd unit file `patroni.service` in `/etc/systemd/system`.
+6. Create the systemd unit file `patroni.service` in `/etc/systemd/system`.

    ```{.bash data-prompt="$"}
    $ sudo vim /etc/systemd/system/patroni.service
    ```
@@ -272,7 +390,7 @@ Install Percona Distribution for PostgreSQL on `node1`, `node2` and `node3` from
        WantedBy=multi-user.target
    ```

-8. Make `systemd` aware of the new service:
+7. Make systemd aware of the new service:

    ```{.bash data-prompt="$"}
    $ sudo systemctl daemon-reload
@@ -366,14 +484,14 @@ Install Percona Distribution for PostgreSQL on `node1`, `node2` and `node3` from

 ## Configure HAProxy

-HAProxy node will accept client connection requests and route those to the active node of the PostgreSQL cluster. This way, a client application doesn’t have to know what node in the underlying cluster is the current primary. All it needs to do is to access a single HAProxy URL and send its read/write requests there. Behind-the-scene, HAProxy routes the connection to a healthy node (as long as there is at least one healthy node available) and ensures that client application requests are never rejected.
+HAProxy is the load balancer and the single point of entry to your PostgreSQL cluster for client applications. A client application accesses the HAProxy URL and sends its read/write requests there. Behind the scenes, HAProxy routes write requests to the primary node and read requests to the secondaries in a round-robin fashion so that no secondary instance is unnecessarily loaded. To make this happen, provide different ports in the HAProxy configuration file. In this deployment, writes are routed to port 5000 and reads to port 5001.

-HAProxy is capable of routing write requests to the primary node and read requests - to the secondaries in a round-robin fashion so that no secondary instance is unnecessarily loaded. To make this happen, provide different ports in the HAProxy configuration file. In this deployment, writes are routed to port 5000 and reads - to port 5001.
+This way, a client application doesn’t need to know which node in the underlying cluster is the current primary. HAProxy sends connections to a healthy node (as long as there is at least one healthy node available) and ensures that client application requests are never rejected.

 1. Install HAProxy on the `HAProxy-demo` node:

    ```{.bash data-prompt="$"}
-    $ sudo yum install haproxy
+    $ sudo yum install percona-haproxy
    ```

 2. The HAProxy configuration file path is: `/etc/haproxy/haproxy.cfg`. Specify the following configuration in this file.
diff --git a/docs/solutions/high-availability.md b/docs/solutions/high-availability.md
index 053a0c8a6..b8fb9a24e 100644
--- a/docs/solutions/high-availability.md
+++ b/docs/solutions/high-availability.md
@@ -6,11 +6,11 @@

 - Cluster deployment
 - Testing the cluster

-PostgreSQL has been widely adopted as a modern, high-performance transactional database. A highly available PostgreSQL cluster can withstand failures caused by network outages, resource saturation, hardware failures, operating system crashes, or unexpected reboots. Such cluster is often a critical component of the enterprise application landscape, where [four nines of availability](https://en.wikipedia.org/wiki/High_availability#Percentage_calculation) is a minimum requirement.
+PostgreSQL has been widely adopted as a modern, high-performance transactional database. A highly available PostgreSQL cluster can withstand failures caused by network outages, resource saturation, hardware failures, operating system crashes, or unexpected reboots. Such a cluster is often a critical component of the enterprise application landscape, where [four nines of availability](https://en.wikipedia.org/wiki/High_availability#Percentage_calculation) is a minimum requirement.

-This document provides instructions on how to set up and test a highly-available, single-primary, three-node cluster with Percona PostgreSQL and [Patroni](#patroni).
+There are several methods to achieve high availability in PostgreSQL. In this document, we use [Patroni](#patroni), an open-source tool that facilitates and manages the deployment of highly available PostgreSQL clusters.

-!!! admonition "High availability overview"
+!!! admonition "High availability methods"

     There are a few methods for achieving high availability with PostgreSQL:

@@ -18,10 +18,9 @@ This document provides instructions on how to set up and test a highly-available
     - file system replication,
     - trigger-based replication,
     - statement-based replication,
-    - logical replication, and
-    - Write-Ahead Log (WAL) shipping.
-
-    In recent times, PostgreSQL high availability is most commonly achieved with [streaming replication](#streaming-replication).
+    - logical replication,
+    - Write-Ahead Log (WAL) shipping,
+    - [streaming replication](#streaming-replication).

 ## Streaming replication

@@ -45,26 +44,48 @@ This document provides instructions on how to set up and test a highly-available

 ## Patroni

-[Patroni](https://patroni.readthedocs.io/en/latest/) provides a template-based approach to create highly available PostgreSQL clusters. Running atop the PostgreSQL streaming replication process, it integrates with watchdog functionality to detect failed primary nodes and take corrective actions to prevent outages. Patroni also provides a pluggable configuration store to manage distributed, multi-node cluster configuration and comes with REST APIs to monitor and manage the cluster. There is also a command-line utility called _patronictl_ that helps manage switchovers and failure scenarios.
+[Patroni](https://patroni.readthedocs.io/en/latest/) provides a template-based approach to create highly available PostgreSQL clusters. Running atop the PostgreSQL streaming replication process, it integrates with watchdog functionality to detect failed primary nodes and take corrective actions to prevent outages. Patroni also relies on a pluggable configuration store to manage distributed, multi-node cluster configuration and to store information about the cluster health. Patroni comes with REST APIs to monitor and manage the cluster, and has a command-line utility called _patronictl_ that helps manage switchovers and failure scenarios.
+
+### Key benefits of Patroni
+
+* Continuous monitoring and automatic failover
+* Manual/scheduled switchover with a single command
+* Built-in automation for bringing a failed node back into the cluster
+* REST APIs for the entire cluster configuration and further tooling
+* Infrastructure for transparent application failover
+* Distributed consensus for every action and configuration
+* Integration with the Linux watchdog for avoiding split-brain syndrome

 ## Architecture layout

 The following diagram shows the architecture of a three-node PostgreSQL cluster with a single-leader node.

-![Architecture of the three-node, single primary PostgreSQL cluster](../_images/diagrams/patroni-architecture.png)
+![Architecture of the three-node, single primary PostgreSQL cluster](../_images/diagrams/ha-architecture-patroni.png)

 ### Components

-The following are the components:
+The components in this architecture are:
+
+- PostgreSQL nodes
+
+- Patroni provides a template for configuring a highly available PostgreSQL cluster.
+
+- ETCD is a distributed configuration store that keeps the state of the PostgreSQL cluster.
+
+- HAProxy is the load balancer for the cluster and is the single point of entry for client applications.
+
+- Softdog is a watchdog utility that Patroni uses to check the nodes' health. The watchdog resets the whole system when it doesn't receive a keepalive heartbeat within a specified time.
+
+### How components work together
+
+Each PostgreSQL instance in the cluster maintains consistency with other members through streaming replication. Each instance hosts Patroni, a cluster manager that monitors the cluster health. Patroni relies on the operational ETCD cluster to store the cluster configuration and information about the cluster health.
+
+Patroni periodically sends heartbeat requests with the cluster status to ETCD. ETCD writes this information to disk and sends the response back to Patroni. If the current primary fails to renew its status as leader within the specified timeout, Patroni updates the state change in ETCD, which uses this information to elect the new primary and keep the cluster up and running.

-- Three PosgreSQL nodes: `node1`, `node2` and `node3`
-- A dedicated HAProxy node `HAProxy-demo`. HAProxy is an open-source load balancing software through which client connections to the cluster are routed.
-- ETCD - a distributed configuration storage
-- Softdog - a watchdog utility which is used to detect unhealthy nodes in an acceptable time frame.
+Connections to the cluster do not go directly to the database nodes but are routed through a connection proxy such as HAProxy. The proxy determines the active node by querying the Patroni REST API.

 ## Deployment

-Use the links below to navigate to the setup instructions relevant to your operating system
+Use the following links to navigate to the setup instructions relevant to your operating system:

 - [Deploy on Debian or Ubuntu](ha-setup-apt.md)
 - [Deploy on Red Hat Enterprise Linux or CentOS](ha-setup-yum.md)
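For reference, the port-based routing described above can be expressed in `/etc/haproxy/haproxy.cfg` roughly as follows. This is a minimal sketch, not the configuration shipped in this patch: it assumes the node IPs 10.104.0.1, 10.104.0.2, and 10.104.0.3 used throughout this guide, and Patroni's REST API listening on its default port 8008.

```
global
    maxconn 100

defaults
    log global
    mode tcp
    retries 2
    timeout client 30m
    timeout connect 4s
    timeout server 30m
    timeout check 5s

# Writes on port 5000: only the node whose Patroni REST API answers 200 on /primary
listen primary
    bind *:5000
    option httpchk GET /primary
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server node1 10.104.0.1:5432 maxconn 100 check port 8008
    server node2 10.104.0.2:5432 maxconn 100 check port 8008
    server node3 10.104.0.3:5432 maxconn 100 check port 8008

# Reads on port 5001: round-robin across nodes answering 200 on /replica
listen standbys
    balance roundrobin
    bind *:5001
    option httpchk GET /replica
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server node1 10.104.0.1:5432 maxconn 100 check port 8008
    server node2 10.104.0.2:5432 maxconn 100 check port 8008
    server node3 10.104.0.3:5432 maxconn 100 check port 8008
```

With a layout like this, HAProxy marks a server on port 5000 as available only while Patroni's `/primary` endpoint returns HTTP 200, and a server on port 5001 only while `/replica` does, so write traffic always reaches the current leader and read traffic is spread across the replicas.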